perm filename MSG1.MSG[JNK,JMC]1 blob
sn#729855 filedate 1983-11-10 generic text, type C, neo UTF8
COMMENT ⊗ VALID 00270 PAGES
C REC PAGE DESCRIPTION
C00001 00001
C00034 00002 ∂18-Aug-83 2138 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #20
C00053 00003 ∂19-Aug-83 0741 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #21
C00077 00004 ∂19-Aug-83 1551 rita@su-score [Rita Leibovitz <RITA@Score>: Accepted Our Offer Ph.D./MS]
C00087 00005 ∂19-Aug-83 1927 LAWS@SRI-AI.ARPA AIList Digest V1 #43
C00116 00006 ∂21-Aug-83 0014 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #22
C00122 00007 ∂21-Aug-83 1443 larson@Shasta Alan Borning on Computer Reliability and Nuclear War
C00124 00008 ∂22-Aug-83 1145 LAWS@SRI-AI.ARPA AIList Digest V1 #44
C00158 00009 ∂22-Aug-83 1347 LAWS@SRI-AI.ARPA AIList Digest V1 #45
C00180 00010 ∂22-Aug-83 1632 ELYSE@SU-SCORE.ARPA Your current address, Visitors
C00182 00011 ∂22-Aug-83 1650 ELYSE@SU-SCORE.ARPA Chairman at North Carolina University
C00183 00012 ∂23-Aug-83 1228 LAWS@SRI-AI.ARPA AIList Digest V1 #46
C00204 00013 ∂24-Aug-83 0852 TAJNAI@SU-SCORE.ARPA My talk for Japan
C00206 00014 ∂24-Aug-83 1130 JF@SU-SCORE.ARPA student support
C00212 00015 ∂24-Aug-83 1206 LAWS@SRI-AI.ARPA AIList Digest V1 #47
C00240 00016 ∂24-Aug-83 1321 BSCOTT@SU-SCORE.ARPA Re: student support
C00242 00017 ∂24-Aug-83 1847 BRODER@SU-SCORE.ARPA AFLB
C00244 00018 ∂25-Aug-83 0755 GOLUB@SU-SCORE.ARPA Brooks Vote
C00246 00019 ∂25-Aug-83 1057 LAWS@SRI-AI.ARPA AIList Digest V1 #48
C00265 00020 ∂25-Aug-83 1444 BRODER@SU-SCORE.ARPA ISL Seminar
C00268 00021 ∂25-Aug-83 1525 BRODER@SU-SCORE.ARPA Duplication of messages
C00269 00022 ∂26-Aug-83 1339 GOLUB@SU-SCORE.ARPA acting chairman
C00270 00023 ∂29-Aug-83 1311 LAWS@SRI-AI.ARPA AIList Digest V1 #49
C00294 00024 ∂29-Aug-83 1458 @SU-SCORE.ARPA:RFN@SU-AI
C00298 00025 ∂30-Aug-83 1143 LAWS@SRI-AI.ARPA AIList Digest V1 #50
C00316 00026 ∂30-Aug-83 1825 LAWS@SRI-AI.ARPA AIList Digest V1 #51
C00346 00027 ∂02-Sep-83 1043 LAWS@SRI-AI.ARPA AIList Digest V1 #53
C00368 00028 ∂02-Sep-83 1625 SCHMIDT@SUMEX-AIM LMI Window System Manual
C00369 00029 ∂03-Sep-83 0016 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #23
C00376 00030 ∂04-Sep-83 2246 @SU-SCORE.ARPA:reid@Glacier public picking on fellow faculty members
C00380 00031 ∂06-Sep-83 0020 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #24
C00390 00032 ∂06-Sep-83 0630 REGES@SU-SCORE.ARPA The new CS 105 A & B
C00394 00033 ∂07-Sep-83 0013 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #25
C00401 00034 ∂09-Sep-83 1242 @MIT-MC:AUSTIN@DEC-MARLBORO DISTRIBUTION LIST MEMBERSHIP
C00402 00035 ∂09-Sep-83 1317 LAWS@SRI-AI.ARPA AIList Digest V1 #54
C00432 00036 ∂09-Sep-83 1628 LAWS@SRI-AI.ARPA AIList Digest V1 #55
C00454 00037 ∂09-Sep-83 1728 LAWS@SRI-AI.ARPA AIList Digest V1 #56
C00473 00038 ∂10-Sep-83 0017 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #26
C00487 00039 ∂13-Sep-83 1349 @SU-SCORE.ARPA:CAB@SU-AI hives, smoke, etc.
C00489 00040 ∂14-Sep-83 2203 @SU-SCORE.ARPA:ROD@SU-AI Departmental Lecture Series
C00491 00041 ∂15-Sep-83 1314 ELYSE@SU-SCORE.ARPA Updating of Faculty Interests for 83-84
C00493 00042 ∂15-Sep-83 1320 LENAT@SU-SCORE.ARPA Colloquium
C00495 00043 ∂15-Sep-83 1354 @SU-SCORE.ARPA:TW@SU-AI
C00496 00044 ∂15-Sep-83 1511 cheriton%SU-HNV.ARPA@SU-SCORE.ARPA Re: Colloquium
C00498 00045 ∂15-Sep-83 2007 LAWS@SRI-AI.ARPA AIList Digest V1 #57
C00524 00046 ∂16-Sep-83 1326 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #27
C00533 00047 ∂16-Sep-83 1714 LAWS@SRI-AI.ARPA AIList Digest V1 #58
C00557 00048 ∂19-Sep-83 1143 REGES@SU-SCORE.ARPA Research support for new PhD students
C00560 00049 ∂19-Sep-83 1751 LAWS@SRI-AI.ARPA AIList Digest V1 #59
C00586 00050 ∂20-Sep-83 1045 ELYSE@SU-SCORE.ARPA Faculty Meeting Next Week
C00588 00051 ∂20-Sep-83 1121 LAWS@SRI-AI.ARPA AIList Digest V1 #60
C00610 00052 ∂20-Sep-83 1735 GOLUB@SU-SCORE.ARPA Faculty meetings
C00611 00053 ∂20-Sep-83 1757 GOLUB@SU-SCORE.ARPA Appointment
C00612 00054 ∂21-Sep-83 0837 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #28
C00619 00055 ∂21-Sep-83 1419 rita@su-score CSMS Update
C00624 00056 ∂21-Sep-83 1619 GOLUB@SU-SCORE.ARPA Reception
C00625 00057 ∂22-Sep-83 1018 GOLUB@SU-SCORE.ARPA Registration
C00626 00058 ∂22-Sep-83 1847 LAWS@SRI-AI.ARPA AIList Digest V1 #61
C00654 00059 ∂22-Sep-83 2332 GOLUB@SU-SCORE.ARPA IBM relations
C00655 00060 ∂23-Sep-83 0827 @SU-SCORE.ARPA:RINDFLEISCH@SUMEX-AIM.ARPA Re: IBM relations
C00657 00061 ∂23-Sep-83 0933 cheriton%SU-HNV.ARPA@SU-SCORE.ARPA Re: IBM relations
C00660 00062 ∂23-Sep-83 1133 @SU-SCORE.ARPA:ROD@SU-AI Re: IBM relations
C00662 00063 ∂23-Sep-83 1216 GOLUB@SU-SCORE.ARPA Alumni letter
C00664 00064 ∂23-Sep-83 1236 SCHMIDT@SUMEX-AIM mouse will be in the shop this afternoon
C00665 00065 ∂23-Sep-83 2010 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #29
C00684 00066 ∂24-Sep-83 1354 lantz%SU-HNV.ARPA@SU-SCORE.ARPA Re: IBM relations
C00687 00067 ∂25-Sep-83 1147 WIEDERHOLD%SUMEX-AIM.ARPA@SU-SCORE.ARPA Re: IBM relations
C00689 00068 ∂25-Sep-83 1736 LAWS@SRI-AI.ARPA AIList Digest V1 #62
C00716 00069 ∂25-Sep-83 2055 LAWS@SRI-AI.ARPA AIList Digest V1 #63
C00740 00070 ∂26-Sep-83 0605 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #30
C00753 00071 ∂26-Sep-83 0913 ELYSE@SU-SCORE.ARPA Letter from Baudoin
C00758 00072 ∂26-Sep-83 0936 SCHREIBER@SU-SCORE.ARPA Where
C00759 00073 ∂26-Sep-83 0949 SHARON@SU-SCORE.ARPA Prof. Misra
C00760 00074 ∂26-Sep-83 1012 @SU-SCORE.ARPA:REG@SU-AI
C00762 00075 ∂26-Sep-83 1419 @SU-SCORE.ARPA:reid@Glacier Re: Where
C00764 00076 ∂26-Sep-83 1436 ELYSE@SU-SCORE.ARPA Agenda for Faculty Meeting Tomorrow
C00767 00077 ∂26-Sep-83 1536 @SU-SCORE.ARPA:OR.STEIN@SU-SIERRA.ARPA Re: Colloquium
C00769 00078 ∂26-Sep-83 1540 @SU-SCORE.ARPA:FY@SU-AI reception at Don Knuth's home
C00771 00079 ∂26-Sep-83 2348 LAWS@SRI-AI.ARPA AIList Digest V1 #64
C00790 00080 ∂27-Sep-83 1053 GOLUB@SU-SCORE.ARPA today's meeting
C00791 00081 ∂27-Sep-83 1552 @SU-SCORE.ARPA:FY@SU-AI department-wide reception
C00793 00082 ∂27-Sep-83 1740 GOLUB@SU-SCORE.ARPA Wirth's visit
C00795 00083 ∂28-Sep-83 0755 rita@su-score [Rita Leibovitz <RITA@Score>: Accepted Our Offer Ph.D./MS]
C00805 00084 ∂28-Sep-83 1557 @SU-SCORE.ARPA:DEK@SU-AI Lemons
C00807 00085 ∂29-Sep-83 1120 LAWS@SRI-AI.ARPA AIList Digest V1 #65
C00831 00086 ∂29-Sep-83 1438 LAWS@SRI-AI.ARPA AIList Digest V1 #66
C00856 00087 ∂29-Sep-83 1610 LAWS@SRI-AI.ARPA AIList Digest V1 #67
C00880 00088 ∂29-Sep-83 1910 BRODER@SU-SCORE.ARPA First AFLB talk this year
C00883 00089 ∂29-Sep-83 2035 @SU-SCORE.ARPA:YM@SU-AI Terminals
C00885 00090 ∂30-Sep-83 0625 reid%SU-SHASTA.ARPA@SU-SCORE.ARPA number of graduating students
C00887 00091 ∂30-Sep-83 1049 CLT SEMINAR IN LOGIC AND FOUNDATIONS
C00889 00092 ∂30-Sep-83 1646 ELYSE@SU-SCORE.ARPA Niklaus Wirth Visit on Tuesday
C00890 00093 ∂30-Sep-83 2146 LENAT@SU-SCORE.ARPA Attendance at Colloquium
C00893 00094 ∂01-Oct-83 0822 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #31
C00901 00095 ∂01-Oct-83 1801 GOLUB@SU-SCORE.ARPA reception
C00902 00096 ∂01-Oct-83 1804 GOLUB@SU-SCORE.ARPA Meeting
C00903 00097 ∂01-Oct-83 1808 GOLUB@SU-SCORE.ARPA Dinner for Wirth
C00904 00098 ∂03-Oct-83 1104 LAWS@SRI-AI.ARPA AIList Digest V1 #68
C00929 00099 ∂03-Oct-83 1255 LAWS@SRI-AI.ARPA AIList Digest V1 #69
C00958 00100 ∂03-Oct-83 1550 GOLUB@SU-SCORE.ARPA meeting
C00959 00101 ∂03-Oct-83 1558 GOLUB@SU-SCORE.ARPA lunch
C00960 00102 ∂03-Oct-83 1636 larson@Shasta Implications of accepting DOD funding
C00964 00103 ∂03-Oct-83 1907 LAWS@SRI-AI.ARPA AIList Digest V1 #70
C00995 00104 ∂05-Oct-83 1327 BRODER@SU-SCORE.ARPA First AFLB talk this year
C00997 00105 ∂05-Oct-83 1353 GOLUB@SU-SCORE.ARPA Attendance at Tenured Faculty Meetings
C00999 00106 ∂05-Oct-83 1717 GOLUB@SU-SCORE.ARPA committee assignments
C01000 00107 ∂05-Oct-83 1717 GOLUB@SU-SCORE.ARPA Course proliferation
C01002 00108 ∂06-Oct-83 0025 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #32
C01020 00109 ∂06-Oct-83 1525 LAWS@SRI-AI.ARPA AIList Digest V1 #71
C01043 00110 ∂06-Oct-83 2023 LENAT@SU-SCORE.ARPA Fuzzy Lunch
C01044 00111 ∂07-Oct-83 2127 REGES@SU-SCORE.ARPA Use of CSD machines for coursework
C01048 00112 ∂08-Oct-83 0025 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #33
C01061 00113 ∂08-Oct-83 1745 BRODER@SU-SCORE.ARPA Speaker needed
C01062 00114 ∂09-Oct-83 0852 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #34
C01073 00115 ∂10-Oct-83 1544 GOLUB@SU-SCORE.ARPA Exciting application
C01074 00116 ∂10-Oct-83 1623 LAWS@SRI-AI.ARPA AIList Digest V1 #72
C01096 00117 ∂10-Oct-83 2157 LAWS@SRI-AI.ARPA AIList Digest V1 #73
C01125 00118 ∂11-Oct-83 1013 GOLUB@SU-SCORE.ARPA Today's lunch
C01126 00119 ∂11-Oct-83 1534 SCHMIDT@SUMEX-AIM LM-2 down Thursday 8am - noon
C01127 00120 ∂11-Oct-83 1539 SCHMIDT@SUMEX-AIM Symbolics chrome
C01128 00121 ∂11-Oct-83 1749 BRODER@SU-SCORE.ARPA Next AFLB talk(s)
C01132 00122 ∂11-Oct-83 1950 LAWS@SRI-AI.ARPA AIList Digest V1 #74
C01156 00123 ∂12-Oct-83 0022 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #35
C01171 00124 ∂12-Oct-83 1333 @SU-SCORE.ARPA:yao.pa@PARC-MAXC.ARPA Re: Next AFLB talk(s)
C01172 00125 ∂12-Oct-83 1827 LAWS@SRI-AI.ARPA AIList Digest V1 #75
C01199 00126 ∂13-Oct-83 0828 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #36
C01224 00127 ∂13-Oct-83 0902 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #36
C01249 00128 ∂13-Oct-83 1439 GOLUB@SU-SCORE.ARPA teaching obligations
C01251 00129 ∂13-Oct-83 1804 LAWS@SRI-AI.ARPA AIList Digest V1 #76
C01273 00130 ∂14-Oct-83 0224 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #37
C01287 00131 ∂14-Oct-83 1020 CLT SEMINAR IN LOGIC AND FOUNDATIONS
C01289 00132 ∂14-Oct-83 1113 ELYSE@SU-SCORE.ARPA Announcement of DoD-University Program for 1984/85
C01290 00133 ∂14-Oct-83 1545 LAWS@SRI-AI.ARPA AIList Digest V1 #77
C01317 00134 ∂14-Oct-83 2049 LAWS@SRI-AI.ARPA AIList Digest V1 #78
C01344 00135 ∂15-Oct-83 1036 CLT SEMINAR IN LOGIC AND FOUNDATIONS
C01346 00136 ∂16-Oct-83 1501 BRODER@SU-SCORE.ARPA Next AFLB talk(s)
C01350 00137 ∂17-Oct-83 0120 LAWS@SRI-AI.ARPA AIList Digest V1 #79
C01371 00138 ∂17-Oct-83 0221 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #38
C01383 00139 ∂17-Oct-83 1541 SCHMIDT@SUMEX-AIM.ARPA LM-2 unavailable Tuesday morning (10/18)
C01385 00140 ∂18-Oct-83 0219 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #39
C01392 00141 ∂18-Oct-83 0905 GOLUB@SU-SCORE.ARPA Lunch
C01393 00142 ∂18-Oct-83 0913 GOLUB@SU-SCORE.ARPA Library Keys
C01396 00143 ∂18-Oct-83 1022 LIBRARY@SU-SCORE.ARPA Library Key Policy
C01403 00144 ∂18-Oct-83 1131 pratt%SU-NAVAJO.ARPA@SU-SCORE.ARPA security
C01404 00145 ∂18-Oct-83 1450 @SU-SCORE.ARPA:JMC@SU-AI bureaucrary wins
C01405 00146 ∂18-Oct-83 2257 @SU-SCORE.ARPA:JMC@SU-AI
C01406 00147 ∂18-Oct-83 2254 @SU-SCORE.ARPA:JMC@SU-AI
C01407 00148 ∂19-Oct-83 0818 LIBRARY@SU-SCORE.ARPA Reply to McCarthy and Keller concerning Library Services
C01415 00149 ∂19-Oct-83 0937 @SU-SCORE.ARPA:JMC@SU-AI
C01417 00150 ∂19-Oct-83 1003 GOLUB@SU-SCORE.ARPA Thanks
C01418 00151 ∂19-Oct-83 1004 cheriton%SU-HNV.ARPA@SU-SCORE.ARPA Re: Reply to McCarthy and Keller concerning Library Services
C01421 00152 ∂19-Oct-83 1608 SCHREIBER@SU-SCORE.ARPA Library
C01422 00153 ∂19-Oct-83 1611 SCHREIBER@SU-SCORE.ARPA NA Seminar
C01423 00154 ∂19-Oct-83 1622 @SU-SCORE.ARPA:TOB@SU-AI
C01426 00155 ∂19-Oct-83 2305 @SU-SCORE.ARPA:FEIGENBAUM@SUMEX-AIM.ARPA Re: Library
C01428 00156 ∂20-Oct-83 0214 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #40
C01434 00157 ∂20-Oct-83 1120 ELYSE@SU-SCORE.ARPA Message about Visiting Scholar Cards - from Gene H. golub
C01436 00158 ∂20-Oct-83 1158 LIBRARY@SU-SCORE.ARPA Math/CS Library and Electronic Messaging
C01440 00159 ∂20-Oct-83 1541 LAWS@SRI-AI.ARPA AIList Digest V1 #80
C01471 00160 ∂20-Oct-83 1555 CLT SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
C01472 00161 ∂21-Oct-83 0241 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #41
C01491 00162 ∂21-Oct-83 1510 @SU-SCORE.ARPA:WIEDERHOLD@SUMEX-AIM.ARPA Re: Math/CS Library Security vs. no key policy
C01494 00163 ∂24-Oct-83 1255 LAWS@SRI-AI.ARPA AIList Digest V1 #81
C01514 00164 ∂24-Oct-83 1517 FISCHLER@SRI-AI.ARPA Add to Mailing List
C01515 00165 ∂24-Oct-83 2139 JF@SU-SCORE.ARPA november bats
C01516 00166 ∂24-Oct-83 2225 BRODER@SU-SCORE.ARPA Next AFLB talk(s)
C01519 00167 ∂25-Oct-83 1400 @SRI-AI.ARPA:desRivieres.PA@PARC-MAXC.ARPA CSLI Activities for Thursday Oct. 27th
C01522 00168 ∂25-Oct-83 1413 @SRI-AI.ARPA:TW@SU-AI This week's talkware seminar - Greg Nelson - Durand 401
C01525 00169 ∂25-Oct-83 1417 @SRI-AI.ARPA:TW@SU-AI next week's talkware - Nov 1 TUESDAY - K. Nygaard
C01528 00170 ∂25-Oct-83 1518 @SRI-AI.ARPA:GOGUEN@SRI-CSL [GOGUEN at SRI-CSL: rewrite rule seminar]
C01533 00171 ∂25-Oct-83 1551 ELYSE@SU-SCORE.ARPA Newsletter
C01534 00172 ∂25-Oct-83 1610 PETERS@SRI-AI.ARPA Meeting this Friday
C01535 00173 ∂25-Oct-83 1629 BRODER@SU-SCORE.ARPA Abstract of T. C. Hu's talk
C01537 00174 ∂25-Oct-83 1646 BRODER@SU-SCORE.ARPA Special AFLB talk!
C01541 00175 ∂25-Oct-83 1911 JF@SU-SCORE.ARPA testing
C01542 00176 ∂25-Oct-83 1917 @SRI-AI.ARPA:GOGUEN@SRI-CSL correction to rewrite rule seminar date
C01543 00177 ∂26-Oct-83 0227 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #42
C01554 00178 ∂26-Oct-83 1025 ELYSE@SU-SCORE.ARPA Visitor from Marks and Sparks
C01555 00179 ∂26-Oct-83 1338 @SRI-AI.ARPA:TW@SU-AI WHOOPS! Talkware seminar is in 380Y today, not Durand
C01556 00180 ∂26-Oct-83 1429 GOLUB@SU-SCORE.ARPA next meeting
C01557 00181 ∂26-Oct-83 1614 LAWS@SRI-AI.ARPA AIList Digest V1 #82
C01585 00182 ∂26-Oct-83 1637 @SRI-AI.ARPA:desRivieres.PA@PARC-MAXC.ARPA 2 PM Computer Languages Seminar CANCELLED tomorrow
C01586 00183 ∂26-Oct-83 1638 @MIT-MC:MAREK%MIT-OZ@MIT-MC Re: Parallelism and Consciousness
C01588 00184 ∂26-Oct-83 1905 DKANERVA@SRI-AI.ARPA Newsletter No. 6, October 27, 1983
C01617 00185 ∂27-Oct-83 0859 @SU-SCORE.ARPA:OR.STEIN@SU-SIERRA.ARPA Re: Newsletter
C01619 00186 ∂27-Oct-83 1448 @SU-SCORE.ARPA:YM@SU-AI Town Meetings
C01621 00187 ∂27-Oct-83 1859 LAWS@SRI-AI.ARPA AIList Digest V1 #83
C01639 00188 ∂28-Oct-83 0042 TYSON@SRI-AI.ARPA Using the Imagen Laser Printer
C01647 00189 ∂28-Oct-83 0218 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #43
C01655 00190 ∂28-Oct-83 0810 KJB@SRI-AI.ARPA Alfred Tarski
C01656 00191 ∂28-Oct-83 1209 GOLUB@SU-SCORE.ARPA KEYS to MJH
C01658 00192 ∂28-Oct-83 1310 @SU-SCORE.ARPA:MACKINLAY@SUMEX-AIM.ARPA Re: KEYS to MJH
C01660 00193 ∂28-Oct-83 1402 LAWS@SRI-AI.ARPA AIList Digest V1 #84
C01682 00194 ∂29-Oct-83 0201 @SRI-AI.ARPA:Bush@SRI-KL.ARPA Dennis Klatt seminar
C01685 00195 ∂29-Oct-83 1049 CLT SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
C01687 00196 ∂29-Oct-83 1059 @SRI-AI.ARPA:CLT@SU-AI SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
C01689 00197 ∂30-Oct-83 1142 ALMOG@SRI-AI.ARPA reminder on why context wont go away
C01691 00198 ∂30-Oct-83 1241 KJB@SRI-AI.ARPA Visit by Glynn Winskel
C01694 00199 ∂30-Oct-83 1730 @SRI-AI.ARPA:BrianSmith.pa@PARC-MAXC.ARPA Request
C01696 00200 ∂30-Oct-83 2310 GOLUB@SU-SCORE.ARPA Position at ONR-London
C01698 00201 ∂31-Oct-83 0901 SCHREIBER@SU-SCORE.ARPA Talk today
C01700 00202 ∂31-Oct-83 1003 HANS@SRI-AI.ARPA Re: Request
C01705 00203 ∂31-Oct-83 1006 @SU-SCORE.ARPA:Guibas.pa@PARC-MAXC.ARPA Re: Talk today
C01706 00204 ∂31-Oct-83 1032 RPERRAULT@SRI-AI.ARPA Re: Request
C01708 00205 ∂31-Oct-83 1103 KJB@SRI-AI.ARPA Re: Request
C01710 00206 ∂31-Oct-83 1207 KJB@SRI-AI.ARPA Committee assignments (first pass)
C01716 00207 ∂31-Oct-83 1445 LAWS@SRI-AI.ARPA AIList Digest V1 #85
C01746 00208 ∂31-Oct-83 1951 LAWS@SRI-AI.ARPA AIList Digest V1 #86
C01774 00209 ∂31-Oct-83 1959 GOLUB@SU-SCORE.ARPA meeting
C01775 00210 ∂31-Oct-83 2346 BRODER@SU-SCORE.ARPA AFLB remainder
C01776 00211 ∂01-Nov-83 0233 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #44
C01799 00212 ∂01-Nov-83 1339 BRODER@SU-SCORE.ARPA Next AFLB talk(s)
C01803 00213 ∂01-Nov-83 1415 GOLUB@SU-SCORE.ARPA lunches
C01804 00214 ∂01-Nov-83 1449 RPERRAULT@SRI-AI.ARPA Winskel lectures
C01812 00215 ∂01-Nov-83 1615 LIBRARY@SU-SCORE.ARPA Integration the VLSI Journal--recommendations?
C01814 00216 ∂01-Nov-83 1649 LAWS@SRI-AI.ARPA AIList Digest V1 #87
C01844 00217 ∂02-Nov-83 0939 @SRI-AI.ARPA:desRivieres.PA@PARC-MAXC.ARPA CSLI Activities for Thursday Nov. 3rd
C01847 00218 ∂02-Nov-83 0955 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #45
C01871 00219 ∂02-Nov-83 1727 @SRI-AI.ARPA:YM@SU-AI Knowledge Seminar
C01874 00220 ∂02-Nov-83 2049 @SRI-AI.ARPA:ADavis@SRI-KL.ARPA Center for Language Mailing List
C01875 00221 ∂02-Nov-83 2109 CLT SPECIAL ANNOUNCEMENT
C01877 00222 ∂03-Nov-83 0020 @SRI-AI.ARPA:vardi%SU-HNV.ARPA@SU-SCORE.ARPA Knowledge Seminar
C01880 00223 ∂03-Nov-83 0224 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #46
C01894 00224 ∂03-Nov-83 0901 DKANERVA@SRI-AI.ARPA Newsletter No. 7, November 3, 1983
C01920 00225 ∂03-Nov-83 0952 DKANERVA@SRI-AI.ARPA
C01925 00226 ∂03-Nov-83 1048 RIGGS@SRI-AI.ARPA Temporary Housing Offer
C01927 00227 ∂03-Nov-83 1624 BRODER@SU-SCORE.ARPA Puzzle
C01929 00228 ∂03-Nov-83 1710 LAWS@SRI-AI.ARPA AIList Digest V1 #88
C01948 00229 ∂03-Nov-83 1826 @SU-SCORE.ARPA:JMC@SU-AI
C01949 00230 ∂03-Nov-83 2008 CLT SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
C01951 00231 ∂03-Nov-83 2114 @SRI-AI.ARPA:CLT@SU-AI SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
C01953 00232 ∂04-Nov-83 0029 LAWS@SRI-AI.ARPA AIList Digest V1 #89
C01977 00233 ∂04-Nov-83 0222 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #47
C01991 00234 ∂04-Nov-83 0900 KJB@SRI-AI.ARPA announcments and items for newsletter
C01993 00235 ∂04-Nov-83 1438 HANS@SRI-AI.ARPA Job in Konstanz/Germany
C01995 00236 ∂04-Nov-83 1536 HANS@SRI-AI.ARPA csli mail
C01999 00237 ∂04-Nov-83 1943 @SRI-AI.ARPA:vardi%SU-HNV.ARPA@SU-SCORE.ARPA Knowledge Seminar
C02002 00238 ∂05-Nov-83 0107 LAWS@SRI-AI.ARPA AIList Digest V1 #90
C02021 00239 ∂05-Nov-83 1505 KJB@SRI-AI.ARPA Committee Assignments
C02025 00240 ∂05-Nov-83 1513 KJB@SRI-AI.ARPA Grant No. 2
C02027 00241 ∂05-Nov-83 1633 KJB@SRI-AI.ARPA Advisory Panel
C02028 00242 ∂06-Nov-83 0228 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #48
C02054 00243 ∂07-Nov-83 0228 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #49
C02073 00244 ∂07-Nov-83 0920 LAWS@SRI-AI.ARPA AIList Digest V1 #91
C02101 00245 ∂07-Nov-83 1027 KONOLIGE@SRI-AI.ARPA Dissertation
C02103 00246 ∂07-Nov-83 1030 EMMA@SRI-AI.ARPA recycling
C02104 00247 ∂07-Nov-83 1033 @SU-SCORE.ARPA:EENGELMORE@SUMEX-AIM.ARPA Request from China
C02106 00248 ∂07-Nov-83 1507 LAWS@SRI-AI.ARPA AIList Digest V1 #92
C02127 00249 ∂07-Nov-83 1512 JF@SU-SCORE.ARPA meeting, november 21 at stanford
C02135 00250 ∂07-Nov-83 1744 JF@SU-SCORE.ARPA mailing list
C02136 00251 ∂07-Nov-83 1831 ALMOG@SRI-AI.ARPA reminder on why context wont go away
C02139 00252 ∂07-Nov-83 2011 LAWS@SRI-AI.ARPA AIList Digest V1 #93
C02163 00253 ∂07-Nov-83 2245 @SU-SCORE.ARPA:YM@SU-AI Student Committee Members - 83/84
C02169 00254 ∂08-Nov-83 0927 LB@SRI-AI.ARPA MEETING 11/10 - CSLI Building Options
C02170 00255 ∂08-Nov-83 1101 ULLMAN@SU-SCORE.ARPA computer policy
C02172 00256 ∂08-Nov-83 1908 @SRI-AI.ARPA:desRivieres.PA@PARC-MAXC.ARPA CSLI Activities for Thursday Nov. 10th
C02175 00257 ∂09-Nov-83 0228 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #50
C02198 00258 ∂09-Nov-83 1532 GOLUB@SU-SCORE.ARPA Lunch on Tuesday, Nov 15
C02199 00259 ∂09-Nov-83 1618 @SU-SCORE.ARPA:YM@SU-AI Some points for thoughts and discussion for the Town Meeting:
C02201 00260 ∂09-Nov-83 2344 LAWS@SRI-AI.ARPA AIList Digest V1 #95
C02229 00261 ∂10-Nov-83 0116 DKANERVA@SRI-AI.ARPA Newsletter No. 8, November 10, 1983
C02274 00262 ∂10-Nov-83 0230 LAWS@SRI-AI.ARPA AIList Digest V1 #94
C02299 00263 ∂10-Nov-83 0448 REGES@SU-SCORE.ARPA Charge limiting of student accounts
C02304 00264 ∂10-Nov-83 0944 @SRI-AI.ARPA:BRESNAN.PA@PARC-MAXC.ARPA Re: Newsletter No. 8, November 10, 1983
C02306 00265 ∂10-Nov-83 1058 JF@SU-SCORE.ARPA abstract for G. Kuper's talk
C02309 00266 ∂10-Nov-83 1315 BMACKEN@SRI-AI.ARPA Transportation for Fodor and Partee
C02310 00267 ∂10-Nov-83 1437 KONOLIGE@SRI-AI.ARPA Thesis orals
C02311 00268 ∂10-Nov-83 1447 ELYSE@SU-SCORE.ARPA NSF split up
C02313 00269 ∂10-Nov-83 1553 BRODER@SU-SCORE.ARPA Next AFLB talk(s)
C02318 00270 ∂10-Nov-83 1649 GOLUB@SU-SCORE.ARPA Meeting with Bower
C02319 ENDMK
C⊗;
∂18-Aug-83 2138 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #20
Received: from SU-SCORE by SU-AI with TCP/SMTP; 18 Aug 83 21:30:05 PDT
Date: Thursday, August 18, 1983 8:23PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #20
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Friday, 19 Aug 1983 Volume 1 : Issue 20
Today's Topics:
Implementations - Not
----------------------------------------------------------------------
Date: Tue 16 Aug 83 23:16:49-PDT
From: Pereira@SRI-AI
Subject: Not Is Not Not
The predicate defined by
ugh(X) :- X, !, fail.
ugh(X).
( a.k.a. 'not' in some circles, a.k.a. \+ to DEC-20 Prolog/C-Prolog
users ) is NOT negation for two basic reasons:
1. The problem noted by Sanjai in the last Digest, which is well
known ( well, maybe, well known in the Prolog underground... ).
Essentially, ugh (\+, not) should not be allowed to instantiate
variables in its argument.
2. Even if 1. didn't apply, it would still not be negation, but
rather "finite-failure non-provability", discussed in an excellent
theoretical paper by Lassez et al. at last week's IJCAI, and
previously analyzed by Apt and van Emden ( Journal of the ACM
vol. 29, no. 3, July 1982 ) and by Clark ( in Logic and Databases,
Gallaire and Minker eds., Plenum Press, NY, 1978 ).
Assuming we want negation as nonprovability in a certain context,
there is a conceptually simple cure to problem 1: delay \+ goals until
the goal is ground. If the goal never becomes ground, succeed ( printing
"solution if non-ground \+ goals" ). As far as I know, this idea was
first suggested by Colmerauer and is implemented in Lee Naish's
MU-Prolog ( from Melbourne University ). There are efficiency problems
with his implementation, however, that have to do with the need to
scan the \+ goal over and over again to check it is fully ground
each time a variable in it is instantiated to some nonvariable term.
Given that DEC-20 Prolog and C-Prolog do not have the delaying
mechanism, Richard O'Keefe has implemented a version of \+ ( called
'not', sigh... ) that at least checks that all goals given to the
non-provability predicate are ground, and complains otherwise.
This helps spot those illegitimate uses of non-provability.
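Such a checking predicate is easy to sketch ( the following is only an
illustration, not O'Keefe's actual Not.Pl: it hand-rolls a ground test
under made-up names ground/1 and all_ground/1, assumes \+ as in DEC-10
Prolog and C-Prolog, and merely complains and fails rather than
entering a break ):

    ground(Term) :- var(Term), !, fail.
    ground(Term) :- atomic(Term), !.
    ground(Term) :- Term =.. [_|Args], all_ground(Args).

    all_ground([]).
    all_ground([Arg|Args]) :- ground(Arg), all_ground(Args).

    not(Goal) :- ground(Goal), !, \+ Goal.
    not(Goal) :- write('! not/1 called on a non-ground goal: '),
                 write(Goal), nl, fail.

With these clauses, not(married(X)) with X unbound prints the complaint
and fails instead of silently giving the "no X is married" reading.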
------------------------------
Date: Wednesday, 17-Aug-83 20:09:26-BST
From: Richard HPS (on ERCC DEC-10) <OKeefe.R.A.@EDXA>
Subject: Negation Problem And Solution
As I wrote the file in question ( Not.Hlp ), I am replying to the
following message ( lines beginning with @ ).
@ Date: Monday, 15 Aug 1983 16:41-PDT
@ From: Narain@Rand-Unix
@ Subject: Problem With "Not"
@
@ In browsing through the Prolog utilities today I came across
@ one that advised using \+ instead of not ( or something )
@ and noticed the following example:
@
@ bachelor(X):-not(married(X)),male(X).
It would have been helpful to state which. In fact it was not a
utility (Not.Pl) but the help file (Not.Hlp), which gave me some
trouble finding it again. Yes, I did recommend using \+ instead
of not. There is a very simple and very good reason for that.
Because people kept complaining that 'not'/1 was an "incorrect"
implementation of negation, David Warren changed the symbol for
it to \+ ( which is as close as you can get to |-/- without
rewriting the tokeniser ), standing for "is NOT provable".
'\+'/1 is part of the Dec-10 Prolog system, 'not'/1 is NOT.
'not'/1 is defined in the utility file Invoca.Pl, and if you
don't happen to have loaded that file, your program won't do
what you expect.
Please quote me correctly. My example was:
bachelor(X) :- \+married(X), male(X).
@ If we add:
@
@ married(a).
@ married(b).
@ male(c).
@
@ then :-bachelor(Y). fails. This is what the example said or
@ implied.
That isn't what I said, but it is what I implied. By the way, even
if :- bachelor(Y). were to succeed, it wouldn't tell you who was a
bachelor. You would have to use ?- bachelor(Y) for that ( in DEC-10,
C, EMAS, & Perq Prolog ).
@ But if we rewrite the rule as:
@
@ bachelor(X):-male(X),not(married(X)).
@
@ then :-bachelor(Y). succeeds with Y=c.
If it tells you what Y got bound to, it isn't Dec-10 Prolog or C
Prolog. There IS a Prolog system where this works both ways round,
so it is quite important to say which Prolog you are talking about.
If it succeeds, either you have the naive not/1 built into your
Prolog or you have Invoca.Pl or Not.Pl loaded. I'm not picking on
this writer, really I'm not. I'm just taking the opportunity to
point out that Dec-10 Prolog does not include a predicate not/1.
This had been reported in this Digest as a bug by someone who
didn't get ( or read ? ) the current manual.
@ I think this is serious since the order of literals should not
@ affect what the program computes, only its efficiency if at all.
Come off it! Consider the following:
p(X) :- var(X), X=a.
p :- read(X), write(X).
It is explicitly stated in Clocksin and Mellish, and in the Dec-10
Prolog manual, that the Prolog evaluation strategy is top to bottom
for clauses and >>left to right<< for goals. Goal order is very
definitely part of the language definition. It isn't just a matter
of "efficiency" either. There are some problems where one goal
ordering works just fine, and others fail to terminate. Agreed,
a pure logic-based language would not have this problem. ( Yes,
though I'm defending Dec-10 Prolog, I do agree that it is a problem
when you are learning the language. After a while you get used
to thinking about data-flow and it stops bothering you. )
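( A standard illustration of the termination point, not from the
original message, using invented parent/2 facts and the usual textbook
ancestor/2:

    ancestor(A, D) :- parent(A, D).
    ancestor(A, D) :- parent(A, X), ancestor(X, D).

The query ?- ancestor(eve, Who). simply fails when eve has no recorded
children. Swap the two goals in the recursive clause,

    ancestor(A, D) :- ancestor(A, X), parent(X, D).

and the same query recurses forever, because ancestor/2 calls itself
with the same first argument before parent/2 ever gets a chance to
fail. )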
@ The problem arises since at runtime the argument to "not"
@ contains a variable. This changes the meaning of not(X) to:
@ there exists a Y such that not(married(Y)). In which case Y
@ may be bound to any element in the set:
@ Herbrand-Universe for the program - {Y such that married(Y)
@ be derivable} a potentially infinite set.
NO!! \+p(X) is a negation, right? And variables on the right hand
side of the clause arrow are implicitly EXISTENTIALLY quantified,
aren't they? And (not exists X|p(X)) is (forall X|not p(X))
according to the logic texts. So the meaning of \+p(X), when X is
a variable, is:
is it true that there is NO substitution
for X which makes p(X) true?
What the Dec-10 system does in this case ( i.e. check whether there
is an X such that married(X) ) is CORRECT; it is just surprising if
you don't think about the effect the negation has on that
quantifier. My interpretation of the negation problem is NOT
that Dec-10 Prolog gets negation-with-variables wrong, BUT
- there are TWO possible readings ( forall not, exists not )
- some people expect one reading, some the other
- Dec-10 Prolog only supplies ONE ( which I claim is correct )
- people wanting the other don't realise they can't have it
Negation as failure has a lot of nice properties; see Lloyd et al in
the latest IJCAI Proceedings. However, it ONLY has these properties
when the negated goal is GROUND. ( The Dec-10 treatment of \+p(X)
can be very useful on occasion, but there is no known theory
for it. )
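Concretely ( an added illustration, reusing the facts from the quoted
example ):

    married(a).
    married(b).
    male(c).

    ?- \+ married(X).           % "is there NO X for which married(X)
                                %  is provable?" -- fails: married(a) is.
    ?- male(X), \+ married(X).  % generator first: X = c is bound before
                                %  the now-ground negation -- succeeds.

The first query gives the forall-not reading; anyone wanting the
exists-not reading has to bind X before the negation is tried, as in
the second query.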
There are three solutions to this problem.
1. Always write generators (things that bind X) before tests (such as
\+p(X)). Rely on your own skill to always get it right, once you
have worked out just what Dec-10 Prolog, C Prolog, PDP-11 Prolog,
Prolog-X, Expert Systems Ltd Prolog, PopLog, ... do. A rule of
thumb is to remember that negation never binds anything.
2. As 1, but use the utility Not.Pl to check at run-time that your
negated goals are in fact ground. That's what it's there for.
3. Use a different Prolog system, which will suspend a negated goal
until it is ground, so that even if the generator comes after the
test it will run first.
In the long run, method 3 is the one to go for. There is a coroutine
package available for the Dec-10, which can do this. It runs at about
half the speed of the normal Dec-10 interpreter. Better still, there
is a Prolog system called MU Prolog which was written at the University
of Melbourne, Australia, and is distributed by them, which has
coroutining built in. The bachelor example works just fine either
way around in MU Prolog. MU Prolog is not quite in the Edinburgh
tradition. They deliberately decided to go for clarity and closeness
to logic rather than efficiency ( E.g. they haven't got a compiler
and don't care ), but their interpreter seems to be fast enough. It
is written in C, runs on a variety of machines, and costs ( I think )
$100 Australian. It also has ( or will soon have ) an efficient
extensible hashing scheme for accessing external data, which is
something almost all other Prologs ( including Dec-10 ) lack.
This message may look as though I'm holding two incompatible
positions. They aren't really. Position 1 is
- ordering IS of great importance in most Prologs.
There is nothing special in negation wanting a particular
order. Just consider what would happen if you moved the
cuts...
- if you want efficiency, go for Dec-10 Prolog.
Position 2 is
- ordering should NOT be important in a logic programming language.
- MU Prolog comes closer to being a logic programming language than
Dec-10 Prolog does, while remaining usable for real programs.
Here, by the way, is the file in question, in case you haven't got it.
The utility it describes is available in the <PROLOG> directory as
described in an earlier volume of this Digest. I hope this message is
some help, and in case it gives offence, I apologise in advance.
File: Util:Not.Hlp Author: R.A.O'Keefe Updated: 12 July 1983
#source.
The simple-minded not/1 lives in Util:Invoca.Pl.
Whenever you could use it, you are strongly recommended to use \+ .
The suspicious not/1 lives in Util:Not.Pl.
#purpose.
The simple-minded not/1 was for compatibility with a long-dead version
of Dec-10 Prolog. It has been retained because some other Prologs use
it. However, it is always better to use \+ in Dec-10 Prolog, and it
is no worse in C Prolog.
There are problems with negated goals containing universally
quantified variables. For example, if you write
bachelor(X) :- \+married(X), male(X).
you will not be able to enumerate bachelors. To help you detect such
errors, there is a suspicious version of not/1 which will report any
negated goals containing universally quantified variables.
#commands.
The source only defines one public predicate: not/1.
If it detects an error, not/1 will switch on tracing and enter a
break.
not/1 understands the existential quantifier ↑ . See the description
of bagof and setof in the Prolog manual to find out what that means.
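For example ( an added note, not part of the help file, using an
invented parent/2 relation ): in

    ?- setof(Child, Parent↑parent(Parent, Child), Children).

the Parent↑ prefix marks Parent as existentially quantified, so setof
collects every Child for whom some Parent exists instead of producing
a separate answer for each binding of Parent. The suspicious not/1
accepts the same marking, e.g. not(Parent↑parent(Parent, fred)), so a
variable explicitly quantified this way is not reported as an unbound
variable in a negated goal.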
------------------------------
End of PROLOG Digest
********************
∂19-Aug-83 0741 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #21
Received: from SU-SCORE by SU-AI with TCP/SMTP; 19 Aug 83 07:41:01 PDT
Date: Friday, August 19, 1983 12:29AM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #21
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Friday, 19 Aug 1983 Volume 1 : Issue 21
Today's Topics:
Opinion - Prologs and Prologs,
Implementations - FOOLOG & Interpreter in Lisp
----------------------------------------------------------------------
Date: Thu 18 Aug 83 20:00:36-PDT
From: Pereira@SRI-AI
Subject: There are Prologs and Prologs ...
In the July issue of SIGART an article by Richard Wallace describes
PiL, yet another Prolog in Lisp. The author claims that his
interpreter shows that "it is easy to extend Lisp to do what Prolog
does."
It is a useful pedagogical exercise for Lisp users interested in
logic programming to look at a simple, clean implementation of a
subset of Prolog in Lisp. A particularly illuminating
implementation and discussion is given in "Structure and
Interpretation of Computer Programs", a set of MIT lecture notes
by Abelson and Sussman.
However, such simple interpreters ( even the Abelson and Sussman one
which is far better than PiL ) are not a sufficient basis for the
claim that "it is easy to extend Lisp to do what Prolog does." What
Prolog "does" is not just to make certain deductions in a certain
order, but also MAKE THEM VERY FAST. Unfortunately, ALL Prologs
in Lisp I know of fail in this crucial aspect ( by factors between
30 and 1000 ).
Why is speed such a crucial aspect of Prolog ( or of Lisp, for that
matter )? First, because the development of complex experimental
programs requires MANY, MANY experiments, which just could not be
done if the systems were, say, 100 times slower than they are.
Second, because a Prolog ( Lisp ) system needs to be written
mostly in Prolog ( Lisp ) to support the extensibility that is a
central aspect of modern interactive computing environments.
The following paraphrase of Wallace's claim shows its absurdity:
"[LiA ( Lisp in APL ) shows] that is easy to extend APL to do
what Lisp does." Really? All of what Maclisp does? All of what
ZetaLisp does?
Lisp and Prolog are different if related languages. Both have their
supporters. Both have strengths and ( serious ) weaknesses. Both
can be implemented with comparable efficiency. It is educational
to look both at (sub)Prologs in Lisp and (sub)Lisps in Prolog.
Let's not claim discoveries of philosopher's stones.
Fernando Pereira
AI Center
SRI International
------------------------------
Date: Wed, 17 Aug 1983 10:20 EDT
From: Ken%MIT-OZ@MIT-MC
Subject: FOOLOG Prolog
Here is a small Prolog ( FOOLOG = First Order Oriented LOGic )
written in Maclisp. It includes the evaluable predicates CALL,
CUT, and BAGOF. I will probably permanently damage my reputation
as a MacLisp programmer by showing it, but as an attempt to cut
the hedge, I can say that I wanted to see how small one could
make a Prolog while maintaining efficiency ( approx 2 pages; 75%
of the speed of the Dec-10 Prolog interpreter ). It is actually
possible to squeeze Prolog into 16 lines. If you are interested
in that one and in FOOLOG, I have a ( very ) brief report describing
them that I can send you. Also, I'm glad to answer any questions
about FOOLOG. For me, it is best if you send messages by Snail Mail,
since I do not have a net connection. If that is uncomfortable, you
can also send messages via Ken Kahn, who forwards them.
My address is:
Martin Nilsson
UPMAIL
Computing Science Department
Box 2059
S-750 02 UPPSALA, Sweden
---------- Here is a FOOLOG sample run:
(load 'foolog) ; Lower case is user type-in
; Loading DEFMAX 9844442.
(progn (defpred member ; Definition of MEMBER predicate
((member ?x (?x . ?l)))
((member ?x (?y . ?l)) (member ?x ?l)))
(defpred cannot-prove ; and CANNOT-PROVE predicate
((cannot-prove ?goal) (call ?goal) (cut) (nil))
((cannot-prove ?goal)))
'ok)
OK
(prove (member ?elem (1 2 3)) ; Find elements of the list
(writeln (?elem is an element)))
(1. IS AN ELEMENT)
MORE? t ; Find the next solution
(2. IS AN ELEMENT)
MORE? nil ; This is enough
(TOP)
(prove (cannot-prove (= 1 2))) ; The two cannot-prove cases
MORE? t
NIL
(prove (cannot-prove (= 1 1)))
NIL
---------- And here is the source code:
; FOOLOG Interpreter (c) Martin Nilsson UPMAIL 1983-06-12
(declare (special *inf* *e* *v* *topfun* *n* *fh* *forward*)
(special *bagof-env* *bagof-list*))
(defmacro defknas (fun args &rest body)
`(defun ,fun macro (l)
(cons 'progn (sublis (mapcar 'cons ',args (cdr l))
',body))))
; ---------- Interpreter
(setq *e* nil *fh* nil *n* nil *inf* 0
*forward* (munkam (logior 16. (logand (maknum 0) -16.))))
(defknas imm (m x) (cxr x m))
(defknas setimm (m x v) (rplacx x m v))
(defknas makrecord (n)
(loop with r = (makhunk n) and c for i from 1 to (- n 2) do
(setq c (cons nil nil))
(setimm r i (rplacd c c)) finally (return r)))
(defknas transfer (x y)
(setq x (prog1 (imm x 0) (setq y (setimm x 0 y)))))
(defknas allocate nil
(cond (*fh* (transfer *fh* *n*) (setimm *n* 7 nil))
((setq *n* (setimm (makrecord 8) 0 *n*)))))
(defknas deallocate (on)
(loop until (eq *n* on) do (transfer *n* *fh*)))
(defknas reset (e n) (unbind e) (deallocate n) nil)
(defknas ult (m x)
(cond ((or (atom x) (null (eq (car x) '/?))) x)
((< (cadr x) 7)
(desetq (m . x) (final (imm m (cadr x)))) x)
((loop initially (setq x (cadr x)) until (< x 7) do
(setq x (- x 6)
m (or (imm m 7)
(imm (setimm m 7 (allocate)) 7)))
finally (desetq (m . x) (final (imm m x)))
(return x)))))
(defknas unbind (oe)
(loop with x until (eq *e* oe) do
(setq x (car *e*)) (rplaca x nil) (rplacd x x) (pop *e*)))
(defknas bind (x y n)
(cond (n (push x *e*) (rplacd x (cons n y)))
(t (push x *e*) (rplacd x y) (rplaca x *forward*))))
(lap-a-list '((lap final subr) (hrrzi 1 @ 0 (1)) (popj p) nil))
; (defknas final (x) (cdr (memq nil x))) ; equivalent
(defknas catch-cut (v e)
(and (null (and (eq (car v) 'cut) (eq (cdr v) e))) v))
(defun prove fexpr (gs)
(reset nil nil)
(seek (list (allocate)) (list (car (convq gs nil)))))
(defun seek (e c)
(loop while (and c (null (car c))) do (pop e) (pop c))
(cond ((null c) (funcall *topfun*))
((atom (car c)) (funcall (car c) e (cdr c)))
((loop with rest = (cons (cdar c) (cdr c)) and
oe = *e* and on = *n* and e1 = (allocate)
for a in (symeval (caaar c)) do
(and (unify e1 (cdar a) (car e) (cdaar c))
(setq *inf* (1+ *inf*)
*v* (seek (cons e1 e)
(cons (cdr a) rest)))
(return (catch-cut *v* e1)))
(unbind oe)
finally (deallocate on))))
(defun unify (m x n y)
(loop do
(cond ((and (eq (ult m x) (ult n y)) (eq m n)) (return t))
((null m) (return (bind x y n)))
((null n) (return (bind y x m)))
((or (atom x) (atom y)) (return (equal x y)))
((null (unify m (pop x) n (pop y))) (return nil)))))
; ---------- Evaluable Predicates
(defun inst (m x)
(cond ((let ((y x))
(or (atom (ult m x)) (and (null m) (setq x y)))) x)
((cons (inst m (car x)) (inst m (cdr x))))))
(defun lisp (e c)
(let ((n (pop e)) (oe *e*) (on *n*))
(or (and (unify n '(? 2) (allocate) (eval (inst n '(? 1))))
(seek e c))
(reset oe on))))
(defun cut (e c)
(let ((on (cadr e))) (or (seek (cdr e) c) (cons 'cut on))))
(defun call (e c)
(let ((m (car e)) (x '(? 1)))
(seek e (cons (list (cons (ult m x) '(? 2))) c))))
(defun bagof-topfun nil
(push (inst *bagof-env* '(? 1)) *bagof-list*) nil)
(defun bagof (e c)
(let* ((oe *e*) (on *n*) (*bagof-list* nil)
(*bagof-env* (car e)))
(let ((*topfun* 'bagof-topfun)) (seek e '(((call (? 2))))))
(or (and (unify (pop e) '(? 3) (allocate) *bagof-list*)
(seek e c))
(reset oe on))))
; ---------- Utilities
(defun timer fexpr (x)
(let* ((*rset nil) (*inf* 0) (x (list (car (convq x nil))))
(t1 (prog2 (gc) (runtime) (reset nil nil)
(seek (list (allocate)) x)))
(t1 (- (runtime) t1)))
(list (// (* *inf* 1000000.) t1) 'LIPS (// t1 1000.)
'MS *inf* 'INF)))
(eval-when (compile eval load)
(defun convq (t0 l0)
(cond ((pairp t0) (let* (((t1 . l1) (convq (car t0) l0))
((t2 . l2) (convq (cdr t0) l1)))
(cons (cons t1 t2) l2)))
((null (and (symbolp t0) (eq (getchar t0 1) '/?)))
(cons t0 l0))
((memq t0 l0)
(cons (cons '/? (cons (length (memq t0 l0))
t0)) l0))
((convq t0 (cons t0 l0))))))
(defmacro defpred (pred &rest body)
`(setq ,pred ',(loop for clause in body
collect (car (convq clause nil)))))
(defpred true ((true)))
(defpred = ((= ?x ?x)))
(defpred lisp ((lisp ?x ?y) . lisp))
(defpred cut ((cut) . cut))
(defpred call ((call (?x . ?y)) . call))
(defpred bagof ((bagof ?x ?y ?z) . bagof))
(defpred writeln
((writeln ?x) (lisp (progn (princ '?x) (terpri)) ?y)))
(setq *topfun*
'(lambda nil (princ "MORE? ")
(and (null (read)) '(top))))
------------------------------
Date: Wed, 17 Aug 1983 10:14 EDT
From: Ken%MIT-OZ@MIT-MC
Subject: A Pure Prolog Written In Pure Lisp
;; The following is a tiny Prolog interpreter in MacLisp
;; written by Ken Kahn.
;; It was inspired by other tiny Lisp-based Prologs of
;; Par Emanuelson and Martin Nilsson
;; There are no side-effects anywhere in the implementation,
;; though it is very slow of course.
(defun Prolog (database) ;; a top-level loop for Prolog
(prove (list (rename-variables (read) '(0)))
;; read a goal to prove
'((bottom-of-environment)) database 1)
(prolog database))
(defun prove (list-of-goals environment database level)
;; proves the conjunction of the list-of-goals
;; in the current environment
(cond ((null list-of-goals)
;; succeeded since there are no goals
(print-bindings environment environment)
;; the user answers "y" or "n" to "More?"
(not (y-or-n-p "More?")))
(t (try-each database database
(rest list-of-goals) (first list-of-goals)
environment level))))
(defun try-each (database-left database goals-left goal
environment level)
(cond ((null database-left)
()) ;; fail since nothing left in database
(t (let ((assertion
;; level is used to uniquely rename variables
(rename-variables (first database-left)
(list level))))
(let ((new-environment
(unify goal (first assertion) environment)))
(cond ((null new-environment) ;; failed to unify
(try-each (rest database-left)
database
goals-left
goal
environment level))
((prove (append (rest assertion) goals-left)
new-environment
database
(add1 level)))
(t (try-each (rest database-left)
database
goals-left
goal
environment
level))))))))
(defun unify (x y environment)
(let ((x (value x environment))
(y (value y environment)))
(cond ((variable-p x) (cons (list x y) environment))
((variable-p y) (cons (list y x) environment))
((or (atom x) (atom y))
(and (equal x y) environment))
(t (let ((new-environment
(unify (first x) (first y) environment)))
(and new-environment
(unify (rest x) (rest y)
new-environment)))))))
(defun value (x environment)
(cond ((variable-p x)
(let ((binding (assoc x environment)))
(cond ((null binding) x)
(t (value (second binding) environment)))))
(t x)))
(defun variable-p (x) ;; a variable is a list beginning with "?"
(and (listp x) (eq (first x) '?)))
(defun rename-variables (term list-of-level)
(cond ((variable-p term) (append term list-of-level))
((atom term) term)
(t (cons (rename-variables (first term)
list-of-level)
(rename-variables (rest term)
list-of-level)))))
(defun print-bindings (environment-left environment)
(cond ((rest environment-left)
(cond ((zerop
(third (first (first environment-left))))
(print
(second (first (first environment-left))))
(princ " = ")
(prin1 (value (first (first environment-left))
environment))))
(print-bindings (rest environment-left) environment))))
;; a sample database:
(setq db '(((father jack ken))
((father jack karen))
((grandparent (? grandparent) (? grandchild))
(parent (? grandparent) (? parent))
(parent (? parent) (? grandchild)))
((mother el ken))
((mother cele jack))
((parent (? parent) (? child))
(mother (? parent) (? child)))
((parent (? parent) (? child))
(father (? parent) (? child)))))
;; the following are utilities
(defun first (x) (car x))
(defun rest (x) (cdr x))
(defun second (x) (cadr x))
(defun third (x) (caddr x))
------------------------------
End of PROLOG Digest
********************
∂19-Aug-83 1551 rita@su-score [Rita Leibovitz <RITA@Score>: Accepted Our Offer Ph.D./MS]
Received: from SU-SHASTA by SU-AI with PUP; 19-Aug-83 15:50 PDT
Received: from SU-Score by Shasta with TCP; Fri Aug 19 15:41:32 1983
Date: Fri 19 Aug 83 15:40:42-PDT
From: Rita Leibovitz <RITA@SU-SCORE.ARPA>
Subject: [Rita Leibovitz <RITA@Score>: Accepted Our Offer Ph.D./MS]
To: admissions@SU-SHASTA.ARPA, yearwood@SU-SCORE.ARPA, atkinson@SU-SCORE.ARPA
Stanford-Phone: (415) 497-4365
Add the name of Barbara Brawn to the CSMS, making the total 45.
---------------
Received: from Shasta by Score with Pup; Mon 27 Jun 83 10:05:52-PDT
Received: from Score by Shasta with PUP; Mon, 27 Jun 83 10:05 PDT
Date: Mon 27 Jun 83 10:05:26-PDT
From: Rita Leibovitz <RITA@Score>
Subject: Accepted Our Offer Ph.D./MS
To: admissions@Shasta
cc: yearwood@Score
Stanford-Phone: (415) 497-4365
The following two lists are the Ph.D. and CSMS applicants who have accepted
our offer, as of 6/27/83.
6/27/83 PHD APPLICANTS WHO HAVE ACCEPTED OUR OFFER (21)
MALE = 17 FEMALE = 4
LAST FIRST SEX MINORITY INT1 INT2
---- ----- --- -------- ---- ----
ABADI MARTIN M MTC AI
BLATT MIRIAM F VLSI PSL
CARPENTER CLYDE M PSL OS
CASLEY ROSS M MTC PSL
DAVIS HELEN F DCS VLSI
HADDAD RAMSEY M AI UN
HALL KEITH M UN
KELLS KATHLEEN F AI
KENT MARK M NA OR
LAMPING JOHN M AI PSL
LARRABEE TRACY F PSL AI
MC CALL MICHAEL M PSL CL
MILLS MICHAEL M AI CL
PALLAS JOSEPH M PSL OS
ROY SHAIBAL M VLSI DCS
SANKAR SRIRAM M PSL OS
SCHAFFER ALEJANDRO M HISPANIC AA CM
SHIEBER STUART M CL AI
SUBRAMANIAN ASHOK M AI NETWORKS
SWAMI ARUN NARASIMHA M PSL MTC
TJIANG WENG KIANG M PSL OS
8/19/83 CSMS APPLICANTS WHO HAVE ACCEPTED OUR OFFER (45)
MALE = 37 FEMALE = 8 DEFERRED = 3
LAST FIRST SEX COTERM DEPT. MINORITY
---- ----- --- ------ ----- --------
ANDERSON ALLAN M
ANDERSON STEVEN M
BENNETT DON M
BERNSTEIN DAVID M
BION JOEL M PHILOSOPHY (DEFER UNTIL 9/84)
BRAWN BARBARA F
CAMPOS ALVARO M
CHAI SUN-KI M ASIAN
CHEHIRE WADIH M
CHEN GORDON M ASIAN
COCHRAN KIMBERLY F
COLE ROBERT M
COTTON TODD M MATH
DICKEY CLEMENT M
ETHERINGTON RICHARD M
GARBAGNATI FRANCESCO M
GENTILE CLAUDIO M
GOLDSTEIN MARK M
HARRIS PETER M
HECKERMAN DAVID M
HUGGINS KATHERINE F
JAI HOKIMI BASSIM M
JONSSON BENGT M
JULIAO JORGE M
LEO YIH-SHEH M
LEWINSON JAMES M MATH
LOEWENSTEIN MAX M
MARKS STUART M E.E. ASIAN (DEFER 4/84)
MULLER ERIC M
PERKINS ROBERT M CHEMISTRY
PERNICI BARBARA F
PONCELEON DULCE F
PORAT RONALD M
PROUDIAN DEREK M ENGLISH/COG.SCI
REUS EDWARD M
SCOGGINS JOHN M MATH. SCIENCE
SCOTT KIMBERLY F
VELASCO ROBERTO M
VERDONK BRIGITTE F
WENOCUR MICHAEL M
WICKSTROM PAUL M
WU LI-MEI F
WU NORBERT M ELEC. ENGIN. ASIAN (DEFER 9/84)
YOUNG KARL M
YOUNG PAUL M
-------
-------
∂19-Aug-83 1927 LAWS@SRI-AI.ARPA AIList Digest V1 #43
Received: from SRI-AI by SU-AI with TCP/SMTP; 19 Aug 83 19:26:11 PDT
Date: Friday, August 19, 1983 5:26PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #43
To: AIList@SRI-AI
AIList Digest Saturday, 20 Aug 1983 Volume 1 : Issue 43
Today's Topics:
Administrivia - Request for Archives,
Bindings - J. Pearl,
Programming Languages - Loglisp & LISP CAI Packages,
Automatic Translation - Lisp to Lisp,
Knowledge Representation,
Bibliographies - Sources & AI Journals
----------------------------------------------------------------------
Date: Thu 18 Aug 83 13:19:30-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Archives
I would like to hear from systems people maintaining AIList archives
at their sites. Please msg AIList-Request@SRI-AI if you have an
online archive that is publicly available and likely to be available
under the same file name(s) for the foreseeable future. Send any
special instructions needed (beyond anonymous FTP). I will then make
the information available to the list.
-- Ken Laws
------------------------------
Date: Thu, 18 Aug 83 13:50:16 PDT
From: Judea Pearl <f.judea@UCLA-LOCUS>
Subject: change of address
Effective September 1, 1983 and until March 1, 1984 Judea Pearl's
address will be :
Judea Pearl
c/o Faculty of Management
University of Tel Aviv
Ramat Aviv, ISRAEL
Dr. Pearl will be returning to UCLA at that time.
------------------------------
Date: Wednesday, 17 Aug 1983 17:52-PDT
From: narain@rand-unix
Subject: Information on Loglisp
You can get Loglisp (language or reports) by writing to J.A. Robinson
or E.E. Sibert at:
C.I.S.
313 Link Hall
Syracuse University
Syracuse, NY 13210
A paper on LOGLISP also appeared in "Logic Programming" eds. Clark and
Tarnlund, Academic Press 1982.
-- Sanjai
------------------------------
Date: 17 Aug 83 15:19:44-PDT (Wed)
From: decvax!ittvax!dcdwest!benson @ Ucb-Vax
Subject: LISP CAI Packages
Article-I.D.: dcdwest.214
Is there a computer-assisted instructional package for LISP that runs
under 4.1 bsd ? I would appreciate any information available and will
summarize what I learn ( about the package) in net.lang.lisp.
Peter Benson decvax!ittvax!dcdwest!benson
------------------------------
Date: 17-AUG-1983 19:27
From: SHRAGER%CMU-PSY-A@CMU-CS-PT
Subject: Lisp to Lisp translation again
I'm glad that I didn't have to start this discussion up this time.
Anyhow, here's a suggestion that I think should be implemented but
which requires a great deal of Lisp community cooperation. (Oh
dear...perhaps it's dead already!)
Probably the most intracompatible language around (next to TRAC) is
APL. I've had a great deal of success moving APL workspaces from one
implementation to another with a minimum of effort. Now, part of this
has to do with the fact that APL's primitive set can't be extended
easily, but if you think about it, the question of exactly how you
get all the stuff in a workspace from one machine to the other isn't
an easy one to answer. The special character set makes each machine's
representation a little different and, of course, trying to send the
internal form would be right out!
The APL community solved this rather elegantly: they have a thing
called a "workspace interchange standard" which is in a canonical code
whose first 256 bytes are the atomic vector (character codes) for the
source machine, etc. The beauty of this canconical representation
isn't just that it exists, but rather that the translation to and from
this code is the RESPONSIBILITY OF THE LOCAL IMPLEMENTOR! That is,
for example, if I write a program in Franz and someone at Xerox wants
it, I run it through our local workspace outgoing translator which
puts it into the standard form and then I ship them that (presumably
messy) version. They have a compatible ingoing translator which takes
certain combinations of constructs and translates them to InterLisp.
Now, of course, this isn't all that easy. First we'd have to agree on
a standard but that's not so bad. Most of the difficulty in deciding
on a standard Lisp is taste and that has nothing to do with the form
of the standard since no human ever writes in it. Another difficulty
(here I am indebted to Ken Laws) is that many things have impure
semantics and so cannot be cleanly translated into another form --
take, for example, the spaghetti stack (please!). Anyhow, I never said
it would be easy but I don't think that it's all that difficult either
-- certainly it's easier than the automatic programming problem.
I'll bet this would make a very interesting dissertation for some
bright young Lisp hacker. But the difficult part isn't any particular
translator. Each is hand tailored by the implementors/supporters of a
particular lisp system. The difficult part is getting the Lisp world
to follow the example of a computing success, as, I think, the APL
world has shown workspace interchange to be.
------------------------------
Date: 18 Aug 83 15:31:18-PDT (Thu)
From: decvax!tektronix!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Knowledge Representation, Programming Styles
Article-I.D.: ssc-vax.437
Actually trees can be expressed as attribute-value pairs. Have had to
do that to get around certain %(&↑%$* OPS5 limitations, so it's
possible, but not pretty. However, many times your algebraic/tree
expressions/structures have duplicated components, in which case you
would like to join two nodes at lower levels. You then end up with a
directed structure only. (This is also a solution for multiple
inheritance problems.)
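( An added sketch of the idea in Prolog rather than OPS5, with node/3
and the node names n1..n4 invented for illustration: the expression
(a+b)*(a+b) flattens into attribute-value facts about nodes,

    node(n1, op, times).   node(n1, left, n2).   node(n1, right, n2).
    node(n2, op, plus).    node(n2, left, n3).   node(n2, right, n4).
    node(n3, const, a).    node(n4, const, b).

and letting both argument slots of n1 share n2 is exactly the join at
a lower level that turns the tree into a directed structure. )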
I'll refrain from flaming about traditional (including logic)
grammars. I'm tired of people insisting on a restricted view of
language that claims that grammar rules are the ultimate description
of syntax (semantics being irrelevant) and that idioms are irritating
special cases. I might note that we have basically solved the
language analysis problem (using a version of Berkeley's Phrase
Analysis that handles ambiguity) and are now working on building a
language learner to speed up the knowledge acquisition process, as
well as other interesting projects.
I don't recall a von Neumann bottleneck in AI programs, at least not
of the kind Backus was talking about. The main bottleneck seems to be
of a conceptual rather than a hardware nature. After all, production
systems are not inherently bottlenecked, but nobody really knows how
to make them run concurrently, or exactly what to do with the results
(I have some ideas though).
stan the lep hack
ssc-vax!sts (soon utah-cs)
------------------------------
Date: 16 Aug 83 10:43:54-PDT (Tue)
From: ihnp4!ihuxo!fcy @ Ucb-Vax
Subject: How does one obtain university technical reports?
Article-I.D.: ihuxo.276
I think the bibliographies being posted to the net are great. I'd
like to follow up on some of the references, but I don't know where to
obtain copies for many of them. Is there some standard protocol and
contact point for requesting copies of technical reports from
universities? Is there a service company somewhere from which one
could order such publications with limited distribution?
Curiously,
Fred Yankowski
Bell Labs Rm 6B-216
Naperville, IL
ihnp4!ihuxo!fcy
[I published all the addresses I know in V1 #8, May 22. Two that
might be of help are:
National Technical Information Service
5285 Port Royal Road
Springfield, Virginia 22161
University Microfilms
300 North Zeeb Road
Ann Arbor, MI 48106
You might be able to get ordering information for many sources
through your corporate or public library. You could also contact
LIBRARY@SCORE; I'm sure Richard Manuck would be willing to help.
If all else fails, put out a call for help through AIList. -- KIL]
------------------------------
Date: 17 Aug 83 1:14:51-PDT (Wed)
From: decvax!genrad!mit-eddie!gumby @ Ucb-Vax
Subject: Re: How does one obtain university technical reports?
Article-I.D.: mit-eddi.616
Bizarrely enough, MIT and Stanford AI memos were recently issued by
some company on MICROFILM (!) for some exorbitant price. This price
supposedly gives you all of them plus an introduction by Marvin
Minsky. They advertised in Scientific American a few months ago. I
guess this is a good deal for large institutions like Bell, but
smaller places are unlikely to have a microfilm (or was it fiche)
reader.
MIT AI TR's and memos can be obtained from Publications, MIT AI Lab,
8th floor, 545 Technology Square, Cambridge, MA 02139.
[See AI Magazine, Vol. 4, No. 1, Winter-Spring 1983, pp. 19-22, for
Marvin Minsky's "Introduction to the COMTEX Microfiche Edition of the
Early MIT Artificial Intelligence Memos". An ad on p. 18 offers the
set for $2450. -- KIL]
------------------------------
Date: 17 Aug 83 10:11:33-PDT (Wed)
From: harpo!eagle!mhuxt!mhuxi!mhuxa!ulysses!cbosgd!cbscd5!lvc @
Ucb-Vax
Subject: List of AI Journals
Article-I.D.: cbscd5.419
Here is the list of AI journals that I was able to put together from
the generous contributions of several readers. Sorry about the delay.
Most of the addresses, summary descriptions, and phone numbers for the
journals were obtained from "The Standard Periodical Directory"
published by Oxbridge Communications Inc. 183 Madison Avenue, Suite
1108 New York, NY 10016 (212) 689-8524. Other sources you may wish to
try are Ulrich's International Periodicals Directory, and Ayer
Directory of Publications. These three reference books should be
available in most libraries.
*************************
AI Journals and Magazines
*************************
------------------------------
AI Magazine
American Association for Artificial Intelligence
445 Burgess Drive
Menlo Park, CA 94025
(415) 328-3123
AAAI-OFFICE@SUMEX-AIM
Quarterly, $25/year, $15 Student, $100 Academic/Corporate
------------------------------
Artificial Intelligence
Elsevier Science Publishers B.V. (North-Holland)
P.O. Box 211
1000 AE Amsterdam, The Netherlands
About 8 issues/year, 880 Df. (approx. $352)
------------------------------
American Journal of Computational Linguistics
Donald E. Walker
SRI International
333 Ravenswood Avenue
Menlo Park, CA 94025
(415) 859-3071
Quarterly, individual ACL members $15/year, institutions $30.
------------------------------
Robotics Age
Robotics Publishing Corp.
174 Concord St., Peterborough NH 03458 (603) 924-7136
Technical articles related to design and implementation of
intelligent machine systems
Bimonthly, No price quoted
------------------------------
SIGART Newsletter
Association for Computing Machinery
11 W. 42nd St., 3rd fl.
New York NY 10036
(212) 869-7440
Artificial intelligence, news, reports, abstracts, educational
material, etc. Book reviews.
Bimonthly $12/year, $3/copy
------------------------------
Cognitive Science
Ablex Publishing Corp.
355 Chestnut St.
Norwood NJ 07648
(201) 767-8450
Articles devoted to the emerging fields of cognitive
psychology and artificial intelligence.
Quarterly $22/year
------------------------------
International Journal of Man Machine Studies
Academic Press Inc.
111 Fifth Avenue
New York NY 10013
(212) 741-4000
No description given.
Quarterly $26.50/year
------------------------------
IEEE Transactions on Pattern Analysis and Machine Intelligence
IEEE Computer Society
10662 Los Vaqueros Circle,
Los Alamitos CA 90720
(714) 821-8380
Technical papers dealing with advancements in artificial
machine intelligence
Bimonthly $70/year, $12/copy
------------------------------
Behavioral and Brain Sciences
Cambridge University Press
32 East 57th St.
New York NY 10022
(212) 688-8885
Scientific research in the areas of psychology,
neuroscience, behavioral biology, and cognitive science;
continuing open peer commentary is published in each issue.
Quarterly $95/year, $27/copy
------------------------------
Pattern Recognition
Pergamon Press Inc.
Maxwell House, Fairview Park
Elmsford NY 10523
(914) 592-7700
Official journal of the Pattern Recognition Society
Bimonthly $170/year, $29/copy
------------------------------
************************************
Other journals of possible interest.
************************************
------------------------------
Brain and Cognition
Academic Press
111 Fifth Avenue
New York NY 10003
(212) 741-6800
The latest research in the nonlinguistic aspects of
neuropsychology.
Quarterly $45/year
------------------------------
Brain and Language
Academic Press, Journal Subscription
111 Fifth Avenue
New York NY 10003
(212) 741-6800
No description given.
Quarterly $30/year
------------------------------
Human Intelligence
P.O. Box 1163
Birmingham MI 48012
(313) 642-3104
Explores the research and application of ideas on human
intelligence.
Bimonthly newsletter - No price quoted.
------------------------------
Intelligence
Ablex Publishing Corp.
355 Chestnut St.
Norwood NJ 07648
(201) 767-8450
Original research, theoretical studies and review papers
contributing to understanding of intelligence.
Quarterly $20/year
------------------------------
Journal of the Assn. for the Study of Perception
P.O. Box 744
DeKalb IL 60115
No description given.
Semiannually $6/year
------------------------------
Computational Linguistics and Computer Languages
Humanities Press
Atlantic Highlands NJ 07716
(201) 872-1441
Articles deal with syntactic and semantic [missing word] of
languages relating to math and computer science, primarily
those which summarize, survey, and evaluate.
Semimonthly $46.50/year
------------------------------
Annual Review in Automatic Programming
Pergamon Press Inc.
Maxwell House, Fairview Park
Elmsford NY 10523
(914) 592-7700
A comprehensive treatment of some major topics selected
for their current importance.
Annual $57/year
------------------------------
Computer
IEEE Computer Society
10662 Los Vaqueros Circle
Los Alamitos, CA 90720
(714) 821-8380
Monthly, $6/copy, free with Computer Society Membership
------------------------------
Communications of the ACM
Association for Computing Machinery
11 West 42nd Street
New York, NY 10036
Monthly, $65/year, free with membership ($50, $15 student)
------------------------------
Journal of the ACM
Association for Computing Machinery
11 West 42nd Street
New York, NY 10036
Computer science, including some game theory,
search, foundations of AI
Quarterly, $10/year for members, $50 for nonmembers
------------------------------
Cognition
Associated Scientific Publishers b.v.
P.O. Box 211
1000 AE Amsterdam, The Netherlands
Theoretical and experimental studies of the mind, book reviews
Bimonthly, 140 Df./year (~ $56), 240 Df. institutional
------------------------------
Cognitive Psychology
Academic Press
111 Fifth Avenue
New York, NY 10003
Quarterly, $74 U.S., $87 elsewhere
------------------------------
Robotics Today
Robotics Today
One SME Drive
P.O. Box 930
Dearborn, MI 48121
Robotics in Manufacturing
Bimonthly, $36/year unless member of SME or RIA
------------------------------
Computer Vision, Graphics, and Image Processing
Academic Press
111 Fifth Avenue
New York, NY 10003
$260/year U.S. and Canada, $295 elsewhere
------------------------------
Speech Technology
Media Dimensions, Inc.
525 East 82nd Street
New York, NY 10028
(212) 680-6451
Man/machine voice communications
Quarterly, $50/year
------------------------------
*******************************
Names, but no addresses
*******************************
Magazines
--------
AISB Newsletter
Proceedings
----------
IJCAI International Joint Conference on AI
AAAI American Association for Artificial Intelligence
TINLAP Theoretical Issues in Natural Language Processing
ACL Association for Computational Linguistics
AIM AI in Medicine
MLW Machine Learning Workshop
CVPR Computer Vision and Pattern Recognition (formerly PRIP)
PR Pattern Recognition
IUW Image Understanding Workshop (DARPA)
T&A Trends and Applications (IEEE, NBS)
DADCM Workshop on Data Abstraction, Databases, and Conceptual Modeling
CogSci Cognitive Science Society
EAIC European AI Conference
Thanks again to all that contributed.
Larry Cipriani
cbosgd!cbscd5!lvc
------------------------------
End of AIList Digest
********************
∂21-Aug-83 0014 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #22
Received: from SU-SCORE by SU-AI with TCP/SMTP; 21 Aug 83 00:13:55 PDT
Date: Saturday, August 20, 1983 11:32AM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #22
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Sunday, 21 Aug 1983 Volume 1 : Issue 22
Today's Topics:
Implementations - Not
----------------------------------------------------------------------
Date: Friday, 19 Aug 1983 11:21-PDT
From: Narain@Rand-Unix
Subject: Reply To Reply To Not
Thank you very much for your informative replies to the "not"
question.
Firstly let me state that I have been a long time programmer in
Prolog and LOGLISP and greatly support that style of programming.
I perfectly agree that if one wants efficiency and also wants to
avoid the spurious behavior of "not", one must be careful that
the call to "not" is positioned after the calls to the generators.
It IS a constraint, but one fairly easy to live with, and it does not
diminish the power of Prolog for most practical applications.
In fact I did anticipate that that would be the most reasonable
solution given our Prolog ( C-Prolog ).
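[To make the ordering point concrete, here is a small sketch in
standard Prolog syntax; the p/1 and q/1 facts are invented for the
illustration and are not part of the original discussion.]
    p(a).            % p/1 holds only for a
    q(a).
    q(b).            % q/1 acts as the generator
    % ?- q(X), \+ p(X).     succeeds with X = b, because q/1 grounds X
    %                       before the negation is tried.
    % ?- \+ p(X), q(X).     fails: \+ p(X) is called with X unbound,
    %                       p(X) has a solution (X = a), so the
    %                       negation-as-failure call fails.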
One of the most compelling reasons to use Prolog is that it is based
upon a well understood theoretical framework. ( I certainly do rely
upon it to a considerable extent when developing my Prolog
programs ). If there are deviations from it in a practical
environment, it benefits everyone that they be brought to
light.
I however wish to respond to one more point raised by Richard
O'Keefe. "not" with an argument containing a variable is NOT
interpreted properly by C-Prolog. Richard says:
>> NO!! \+p(X) is a negation, right? And variables on the right
>> hand side of the clause arrow are implicitly EXISTENTIALLY
>> quantified, aren't they? And (not exists X|p(X)) is
>> (forall X|not p(X)), according to the logic texts. So the
>> meaning of \+p(X), when X is a variable, is:
>> is it true that there is NO substitution
>> for X which makes p(X) true?
The above reasoning is incorrect. Firstly, variables are
implicitly existentially quantified ONLY when they do not
appear in the head of the clause. It is easy to verify this.
Secondly "not exists X such that p(X)" is quite different
from "exists X such that not p(X)". The following example
shows how not(p(X)) when X is a variable has the second
interpretation and not the first as pointed out. When you
type a query:
:-p(X).
( which is a clause in which X does not appear on the
left hand side ), you are implicitly asking the
interpreter "is there an X such that p(X)?" So, when
you type:
:-not p(X).
X is still existentially quantified and so the correct
reading is:
"is there an X such that not p(X)?"
and NOT "is there no X such that p(X)?"
which is what the Prolog interpreter assumes anyway.
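[A small sketch of the contrast, again with invented facts: given only
p(a) and q(b), the reading argued for here -- "is there an X such that
not p(X)?" -- holds (X = b is such an X), while negation as failure
answers the other question.]
    p(a).
    q(b).
    % ?- \+ p(X).    fails: the interpreter tries to prove p(X),
    %                succeeds with X = a, and so rejects the negation,
    %                i.e. it answers "is there NO X such that p(X)?"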
Sanjai Narain
Rand Corp.
------------------------------
End of PROLOG Digest
********************
∂21-Aug-83 1443 larson@Shasta Alan Borning on Computer Reliability and Nuclear War
Received: from SU-SHASTA by SU-AI with PUP; 21-Aug-83 14:43 PDT
Date: Sun, 21 Aug 83 14:44 PDT
From: John Larson <larson@Shasta>
Subject: Alan Borning on Computer Reliability and Nuclear War
To: Funding@Sail, su-bboards@Shasta
The Stanford Arms Control and Disarmament Forum &
Computer Professionals for Social Responsibility present:
COMPUTER RELIABILITY AND NUCLEAR WAR
A talk by
A l a n B o r n i n g
Professor of Computer Science at the University of Washington, and Director of
the Seattle Chapter of Computer Professionals for Social Responsibility.
Borning will discuss the computer dependence of the nuclear weapons systems of
both the US and USSR, and the resulting danger of accidental nuclear war. The
talk will also cover the implications of current nuclear strategies and the
issues of computer reliability in command, control, and communication systems.
This talk will be held
Thursday, August 25, at 7:30
History Corner (building 200) Room 2
∂22-Aug-83 1145 LAWS@SRI-AI.ARPA AIList Digest V1 #44
Received: from SRI-AI by SU-AI with TCP/SMTP; 22 Aug 83 11:41:46 PDT
Date: Monday, August 22, 1983 9:39AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #44
To: AIList@SRI-AI
AIList Digest Monday, 22 Aug 1983 Volume 1 : Issue 44
Today's Topics:
AI Architecture - Parallel Processor Request,
Computational Complexity - Maximum Speed,
Functional Programming,
Concurrency - Production Systems & Hardware,
Programming Languages - NETL
----------------------------------------------------------------------
Date: 18 Aug 83 17:30:43-PDT (Thu)
From: decvax!linus!philabs!sdcsvax!noscvax!revc @ Ucb-Vax
Subject: Looking for parallel processor systems
Article-I.D.: noscvax.182
We have been looking into systems to replace our current ANALOG
computers. They are the central component in a real time simulation
system. To date, the only system we've seen that looks like it might
do the job is the Zmob system being built at the Univ. of Md (Mark
Weiser).
I would appreciate it if you could supply me with pointers to other
systems that might support high speed, high quality, parallel
processing.
Note: most High Speed networks are just too slow and we can't justify
a Cray-1.
Bob Van Cleef
uucp : {decvax!ucbvax || philabs}!sdcsvax!nosc!revc arpa : revc@nosc
CompuServe : 71565,533
------------------------------
Date: 19 Aug 83 20:29:13-PDT (Fri)
From: decvax!tektronix!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: maximum speed
Article-I.D.: ssc-vax.445
Hmmm, I didn't know that addition of n numbers could be performed
simultaneously - ok then, constant time matrix multiplication, given
enough processors. I still haven't seen any hard data on limits to
speed because of communications problems. If it seems like there are
limits but you can't prove it, then maybe you haven't discovered the
cleverest way to do it yet...
stan the lep hack
ssc-vax!sts (soon utah-cs)
ps The space cost of constant or log time matrix mults is of course
ridiculous
pps Perhaps this should move to net.applic?
------------------------------
Date: Fri, 19 Aug 83 15:08:15 EDT
From: Paul Broome (CTAB) <broome@brl-bmd>
Subject: Re: Functional programming and AI
Stan,
Let me climb into my pulpit and respond to your FP/AI prod. I don't
think FP and AI are diametrically opposed. To refresh everyone's
memory here are some of your comments.
... Having worked with both AI and FP languages,
it seems to me that the two are diametrically
opposed to one another. The ultimate goal of functional
programming language research is to produce a language that
is as clean and free of side effects as possible; one whose
semantic definition fits on a single side of an 8 1/2 x 11
sheet of paper ...
Looking at Backus' Turing award lecture, I'd have to say that
cleanliness and freedom of side effects are two of Backus' goals but
certainly not succinctness of definition. In fact Backus says (CACM,
Aug. 78, p. 620), "Our intention is to provide FP systems with widely
useful and powerful primitive functions rather than weak ones that
could then be used to define useful ones."
Although FP has no side effects, Backus also talked about applicative
state transition (AST) systems with one top-level change of state per
computation, i.e., one side effect. The world of expressions is a
nice, orderly one; the world of statements has all the mush. He's
trying to move the statement part out of the way.
I'd have to say one important part of the research in FP systems is to
define and examine functional forms (program forming operations) with
nice mathematical properties. A good way to incorporate (read
implement) a mathematical concept in a computer program is without
side effects. This side effect freeness is nice because it means that
a program is 'referentially transparent', i.e. it can be used without
concern about collision with internal names or memory locations AND
the program is dependable; it always does the same thing.
A second nice thing about applicative languages is that they are
appropriate for parallel execution. In a shared memory model of
computation (e.g. Ada) it's very difficult (NP-complete, see CACM, a
couple of months ago) to tell if there is collision between
processors, i.e. is a processor overwriting data that another
processor needs.
On the other hand, the goal of AI research (at least in the
AI language area) is to produce languages that can effectively
work with as tangled and complicated representations of
knowledge as possible. Languages for semantic nets, frames,
production systems, etc, all have this character.
I don't think that's the goal of AI research but I can't offer a
better one at the moment. (Sometimes it looks as if the goal is to
make money.)
Large, tangled structures can be handled in applicative systems but
not efficiently, at least I don't see how. If you view a database
update as a function mapping the pair (NewData, OldDatabase) into
NewDatabase you have to expect a new database as the returned value.
Conceptually that's not a problem. However, operationally there
should just be a minor modification of the original database when
there is no sharing and suspended modification when the database is
being shared. There are limited transformations that can help but
there is much room for improvement.
An important point in all this is program transformation. As we build
bigger and smarter systems we widen the gap between the way we think
and the hardware. We need to write clear, easy to understand, and
large-chunked programs but transform them (within the same source
language) into possibly less clear, but more efficient programs.
Program transformation is much easier when there are no side effects.
Now between the Japanese 5th generation project (and the US
response) and the various projects to build non-vonNeumann
machines using FP, it looks to me like the seeds of a
controversy over the best way to do programming. Should we be
using FP languages or AI languages? We can't have it both ways,
right? Or can we?
A central issue is efficiency. The first FORTRAN compiler was viewed
with the same distrust that the public had about computers in general.
Early programmers didn't want to relinquish explicit management of
registers or whatever because they didn't think the compiler could do
as well as they. Later there was skepticism about garbage collection
and memory management. A multitude of sins is committed in the name
of (machine) efficiency at the expense of people efficiency. We
should concern ourselves more with WHAT objects are stored than with
HOW they are stored.
There's no doubt that applicative languages are applicable. The
Japanese (fortunately for them) are less affected by, as Dijkstra puts
it, "our antimathematical age." And they, unlike us, are willing to
sacrifice some short term goals for long term goals.
- Paul Broome
(broome@brl)
------------------------------
Date: 17 Aug 83 17:06:13-PDT (Wed)
From: decvax!tektronix!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: FP and AI - (nf)
Article-I.D.: ssc-vax.427
There *is* a powerful functional language underlying most AI programs
- Lisp! But it's never pure Lisp. The realization that got me to
thinking about this was the apparent necessity for list surgery,
sooner or later. rplaca and allied functions show up in the strangest
places, and seem to be crucial to the proper functioning of many AI
systems (consider inheritance in frames or the construction of a
semantic network; perhaps method combination in flavors qualifies).
I'm not arguing that an FP language could *not* be used to build an AI
language on top; I'm thinking more about fundamental philosophical
differences between different schools of research.
stan the lep hacker
ssc-vax!sts (soon utah-cs)
------------------------------
Date: Sat 20 Aug 83 12:28:17-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: So the language analysis problem has been solved?!?
I will also refrain from flaming, but not from taking to task
excessive claims.
I'll refrain from flaming about traditional (including
logic) grammars. I'm tired of people insisting on a
restricted view of language that claims that grammar rules
are the ultimate description of syntax (semantics being
irrelevant) and that idioms are irritating special cases. I
might note that we have basically solved the language
analysis problem (using a version of Berkeley's Phrase
Analysis that handles ambiguity) ...
I would love to test that "solution of the language analysis
problem"... As for the author being "tired of people insisting on a
restricted ...", he is just tired of his own straw people, because
there doesn't seem to be anybody around anymore claiming that
"semantics is irrelevant". Formal grammars (logic or otherwise) are
just a convenient mathematical technique for representing SOME
regularities in language in a modular and testable form. OF COURSE, a
formal grammar seen from the PROCEDURAL point of view can be replaced
by any arbitrary "ball of string" with the same operational semantics.
What this replacement does to modularity, testability and
reproducibility of results is sadly clear in the large amount of
published "research" in natural language analysis which is untestable
and irreproducible. The methodological failure of this approach
becomes obvious if one considers the analogous proposal of replacing
the principles and equations of some modern physical theory (general
relativity, say) by a computer program which computes "solutions" to
the equations for some unspecified subset of their domain, some of
these solutions being approximate or plain wrong for some (again
unspecified) set of cases. Even if such a program were "right" all the
time (in contradiction with all our experience so far), its sheer
opacity would make it useless as scientific explanation.
Furthermore, when mentioning "semantics", one better say which KIND of
semantics one means. For example, grammar rules fit very well with
various kinds of truth-theoretic and model-theoretic semantics, so the
comment above cannot be about that kind of semantics. Again, a theory
of semantics needs to be testable and reproducible, and, I would
claim, it only qualifies if it allows the representation of a
potential infinity of situation patterns in a finite way.
I don't recall a von Neumann bottleneck in AI programs, at
least not of the kind Backus was talking about. The main
bottleneck seems to be of a conceptual rather than a
hardware nature. After all, production systems are not
inherently bottlenecked, but nobody really knows how to make
them run concurrently, or exactly what to do with the
results (I have some ideas though).
The reason why nobody knows how to make production systems run
concurrently is simply because they use a global state and side
effects. This IS precisely the von Neumann bottleneck, as made clear
in Backus' article, and is a conceptual limitation with hardware
consequences rather than a purely hardware limitation. Otherwise, why
would Backus address the problem by proposing a new LANGUAGE (fp),
rather than a new computer architecture? If your AI program was
written in a language without side effects (such as PURE Prolog), the
opportunities for parallelism would be there. This would be
particularly welcome in natural language analysis with logic (or other
formal) grammars, because dealing with more and more complex subsets
of language needs an increasing number of grammar rules and rules of
inference, if the results are to be accurate and predictable.
Analysis times, even if they are polynomial on the size of the input,
may grow EXPONENTIALLY with the size of the grammar.
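[For readers who have not seen a logic grammar, a toy definite-clause
grammar, invented for illustration. Each rule expands into an ordinary
Prolog clause, so extending coverage means adding rules; phrase/2 is
assumed available, as it is in most Prolog systems.]
    sentence    --> noun_phrase, verb_phrase.
    noun_phrase --> [the], noun.
    verb_phrase --> [runs].
    verb_phrase --> [halts].
    noun        --> [program].
    noun        --> [parser].
    % ?- phrase(sentence, [the, parser, halts]).   succeeds.
    % ?- phrase(sentence, [halts, the, parser]).   fails.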
Fernando Pereira
AI Center
SRI International
pereira@sri-ai
------------------------------
Date: 15 Aug 83 22:44:05-PDT (Mon)
From: pur-ee!uiucdcs!uicsl!pollack @ Ucb-Vax
Subject: Re: data flow computers and PS's - (nf)
Article-I.D.: uiucdcs.2573
The nodes in a data-flow machine, in order to compute efficiently,
must be able to do a local computation. This is why arithmetic or
logical operations are O.K. to distribute. Your scheme, however,
seems to require that the database of propositions be available to
each node, so that the known facts can be deduced "instantaneously".
This would cause severe problems with the whole idea of concurrency,
because either the database would have to be replicated and passed
through the network, or an elaborate system of memory locks would need
to be established.
The Hearsay system from CMU was one of the early PS's with claims to a
concurrent implementation. There is a paper I remember in IEEE ToC (75
or 76) which discussed the problems of speedup and locks.
Also, I think John Holland (of Michigan?) is currently working on a
parallel PS machine (but doesn't call it that!)
Jordan Pollack
University of Illinois
...!pur-ee!uiucdcs!uicsl!pollack
------------------------------
Date: 17 Aug 83 16:56:55-PDT (Wed)
From: decvax!tektronix!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: data flow computers and PS's - (nf)
Article-I.D.: ssc-vax.426
A concurrent PS is not too impossible, 'cause I've got one
(specialized for NL processing and not actually implemented
concurrently, but certainly capable). It is true that the working
memory would have to be carefully organized, but that's a matter of
sufficiently clever design; there are no fundamental theoretical
problems. Traditional approaches won't work, because two concurrently
operating rules may come to contradictory conclusions, both of which
may be valid. You need a way to store both of these and use them.
stan the leprechaun hacker
ssc-vax!sts (soon utah-cs)
------------------------------
Date: 18 Aug 83 0516 EDT
From: Dave.Touretzky@CMU-CS-A
Subject: NETL
I am a graduate student of Scott Fahlman's, and I've been working on
NETL for the last five years. There are some interesting lessons to
be learned from the history of the NETL project. NETL was a
combination of a parallel computer architecture, called a parallel
marker propagation machine, and a representation language that
appeared to fit well on this architecture. There will probably never
be a hardware implementation of the NETL Machine, although it is
certainly feasible. Here's why...
The first problem with NETL is its radical semantics: no one
completely understands their implications. We (Scott Fahlman, Walter
van Roggen, and I) wrote a paper in IJCAI-81 describing the problems
we had figuring out how exceptions should interact with multiple
inheritance in the IS-A hierarchy and why the original NETL system
handled exceptions incorrectly. We offered a solution in our paper,
but the solution turned out to be wrong. When you consider that NETL
contains many features besides exceptions and inheritance, e.g.
contexts, roles, propositional statements, quantifiers, and so on, and
that all of these features can interact (!!) -- so that a role (a
"slot" in frame lingo) may only exist within certain contexts, have
exceptions to its existence (not its value, which is another matter)
in certain sub-contexts, and may be mapped multiple times because of
the multiple inheritance feature -- it becomes clear just how
complicated the semantics of NETL really is. KLONE is in a similar
position, although its semantics are less radical than NETL's.
Fahlman's book contains many simple examples of network notation
coupled with appeals to the reader's intuition; what it doesn't
contain is a precise mathematical definition of the meaning of a NETL
network because no such definition existed at that time. It wasn't
even clear that a formal definition was necessary, until we began to
appreciate the complexity of the semantic problems. NETL's operators
are *very* nonstandard; NETL is the best evidence I know of that
semantic networks need not be simply notational variants of logic,
even modal or nonmonotonic logics.
In my thesis (forthcoming) I develop a formal semantics for multiple
inheritance with exceptions in semantic network languages such as
NETL. This brings us to the second problem. If we choose a
reasonable formal semantics for inheritance, then inheritance cannot
be computed on a marker propagation machine, because we need to pass
around more information than is possible on such a limited
architecture. The algorithms that were supposed to implement NETL on
a marker propagation machine were wrong: they suffered from race
conditions and other nasty behavior when run on nontrivial networks.
There is a solution called "conditioning" in which the network is
pre-processed on a serial machine by adding enough extra links to
ensure that the marker propagation algorithms always produce correct
results. But the need for serial preprocessing removes much of the
attractiveness of the parallel architecture.
I think the NETL language design stands on its own as a major
contribution to knowledge representation. It raises fascinating
semantic problems, most of which remain to be solved. The marker
propagation part doesn't look too promising, though. Systems with
NETL-like semantics will almost certainly be built in the future, but
I predict they will be built on top of different parallel
architectures.
-- Dave Touretzky
------------------------------
Date: Thu 18 Aug 83 13:46:13-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: NETL and hardware
In Volume 40 of the AIList Alan Glasser asked about hardware
implementations using marker passing a la NETL. The closest hardware I
am aware of is called the Connection Machine, and is being developed
at MIT by Alan Bawden, Dave Christman, and Danny Hillis (apologies if
I left someone out). The project involves building a model with about
2↑10 processors. I'm not sure of its current status, though I have
heard that a company is forming to build and market prototype CM's.
I have heard rumors of the SPICE project at CMU; though I am
not aware of any results pertaining to hardware, the project seems to
have some measure of priority at CMU. Hopefully members of each of
these projects will also send notes to AIList...
David Rogers, DRogers@SUMEX-AIM
------------------------------
Date: Thu, 18 Aug 1983 22:01 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
Subject: NETL
I've only got time for a very quick response to Alan Glasser's query
about NETL. Since the book was published we have done the following:
1. Our group at CMU has developed several design sketches for
practical NETL machine implementations of about a million processing
elements. We haven't built one yet, for reasons described below.
2. David B. McDonald has done a Ph.D. thesis on noun group
understanding (things like "glass wine glass") using a NETL-type
network to hold the necessary world knowledge. (This is available as
a CMU Tech Report.)
3. David Touretzky has done a thorough logical analysis of NETL-style
inheritance with exceptions, and is currently writing up his thesis on
this topic.
4. I have been studying the fundamental strengths and limitations of
NETL-like marker-passing compared to other kinds of massively parallel
computation. This has gradually led me to prefer an architecture that
passes numbers or continuous values to the single-bit marker-passing of
NETL.
For the past couple of years, I've been putting most of my time into
the Common Lisp effort -- a brief foray into tool building that got
out of hand -- and this has delayed any plans to begin work on a NETL
machine. Now that our Common Lisp is nearly finished, I can think
again about starting a hardware project, but something more exciting
than NETL has come along: the Boltzmann Machine architecture that I am
working on with Geoff Hinton of CMU and Terry Sejnowski of
Johns-Hopkins. We will be presenting a paper on this at AAAI.
Very briefly, the Boltzmann machine is a massively parallel
architecture in which each piece of knowledge is distributed over many
units, unlike NETL in which concepts are associated with particular
pieces of hardware. If we can make it work, this has interesting
implications for reliable large-scale implementation, and it is also a
much more plausible model for neural processing than is something like
NETL.
So that's what has happened to NETL.
-- Scott Fahlman (FAHLMAN@CMU-CS-C)
------------------------------
End of AIList Digest
********************
∂22-Aug-83 1347 LAWS@SRI-AI.ARPA AIList Digest V1 #45
Received: from SRI-AI by SU-AI with TCP/SMTP; 22 Aug 83 13:46:49 PDT
Date: Monday, August 22, 1983 10:08AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #45
To: AIList@SRI-AI
AIList Digest Monday, 22 Aug 1983 Volume 1 : Issue 45
Today's Topics:
Language Translation - Lisp-to-Lisp,
Programming Languages - Lisps on 68000s and SUNs
----------------------------------------------------------------------
Date: 19 Aug 1983 2113-PDT
From: VANBUER@USC-ECL
Subject: Lisp Interchange Standard
In response to your message sent Friday, August 19, 1983 5:26PM
On Lisp translation via a standard form:
I have used Interlisp Transor a fair amount both into and out of
Interlisp (even experimented with translation to C), and the kind of
thing which makes it very difficult, especially if you want to retain
some efficiency, are subtle differences in what seem to be fairly
standard functions: e.g. in Interlisp (DREMOVE (CAR X) X) will be EQ
to X (though not EQUAL of course) except in the case the result is
NIL; both CAR and CDR of the lead cell are RPLACed so that all
references to the value of X also see the DREMOVE as a side effect.
In Franz Lisp, the DREMOVE would have the value (CDR X) in most cases,
but no RPLACing is done. In most cases this isn't a problem, but ....
In APL, at least the majority of the language has the same semantics
in all implementations.
Darrel J. Van Buer, SDC
------------------------------
Date: 20 Aug 1983 1226-PDT
From: FC01@USC-ECL
Subject: Re: Language Translation
I like the APL person's [Shrager's] point of view on translation.
The problem seems to be that APL has all the things it needs in its
primitive functions. Lisp implementers have seen fit to impurify
their language by adding so much fancy stuff that they depend on so
heavily. If every lisp program were translated into lisp 1.5 (or
so), it would be easy to port things, but it would end in
inefficient implementations. I like APL; in fact, I like it so much
I've begun maintaining it on our unix system. I've fixed several
bugs, and it now seems to work very well. It has everything any
other APL has, but nobody seems to want to use it except me. I write
simulators in a day, adaptive networks in a week, and analyze
matrices in seconds. So at any rate, anyone who is interested in APL
on the VAX - especially for machine intelligence applications please
get in touch with me. It's not ludicrous by the way; IBM does more
internal R+D in APL than in any other language! That includes their
robotics programs where they do lots of ARM solutions (matrix
manipulation being built into APL has tremendous advantages in this
domain).
FLAME ON!
[I believe this refers to Stan the Leprechaun's submission in
V1 #43. -- KIL]
So if your language translation program is the last word in
translators, how come it's not in the journals? How come nobody knows
that it solves all the problems of translation? How come you haven't
made a lot of money selling COBOL to PASCAL to C to APL to LISP to
ASSEMBLER to BASIC to ... translators in the open market? Is it that
it only works for limited cases? Is it that it only deals with
'natural' languages? Is it really as good as you think, or do you only
think it's really good? How about sharing your (hopefully
non-NP-complete) solution to an NP-complete problem with the rest of us!
FLAME OFF!
[...]
Fred
------------------------------
Date: Sat 20 Aug 83 15:18:13-PDT
From: Mabry Tyson <Tyson@SRI-AI.ARPA>
Subject: Lisp-to-Lisp translation
Some of the comments on Lisp-to-Lisp translation seem to be rather
naive. Translating code that works on pure S-expressions is usually
not too difficult. However, Lisp is not pure Lisp.
I am presently translating some code from Interlisp to Zetalisp (from
a Dec-20 to a Symbolics 3600) and thought a few comments might be
appropriate. First off, Interlisp has TRANSOR which is a package to
translate between Lisps and is programmable. It isn't used often but
it does some of the basic translations. There is an Interlisp
Compatibility Package (ILCP) on the 3600, which when combined with a
CONVERT program to translate from Interlisp (running in Interlisp),
covers a fair amount of Interlisp. (Unfortunately it is still early
in its development - I just rewrote all the I/O functions because they
didn't work for me.)
Even with these aids there are lots of problems. Here are a few
examples I have come across: In the source language, taking the CAR
of an atom did not cause an error. Apparently laziness prevented the
author from writing code to check whether some input was an atom
(which was legal input) before seeing if the CAR of it was some
special symbol.
Since Interlisp-10 is short of cons-cell room, many relatively obscure
pieces of code were designed to use few conses. Thus the author used
and reused scratch lists and scratch strings. The exact effect
couldn't be duplicated. In particular, he would put characters into
specific spots in the scratch string and then would collect the whole
string. (I'm translating this into arrays.)
All the I/O has to be changed around. The program used screen control
characters to do fancy I/O on the screen. It just printed the right
string to go to wherever it wanted. You can't print a string on the
3600 to do that. Also, whether you get an end-of-line character at
the end of input is different (so I have to hand patch code that did a
(RATOM) (READC)). And of course file names (as well as the default
part of them, i.e., the directory) are all different.
Then there are little differences which the compatibility package can
take care of but introduce inefficiencies. For instance, the function
which returns the first position of a character in a string is
different between the two lisps because the values returned are off by
1. So, where the author of the program used that function just to
determine whether the character was in the string, the code is now
computing the position and then offsetting it by 1.
The ILCP does have a nice advantage of letting me use the Interlisp
name for functions even though there is a similarly named, but
different, function in Zetalisp.
Unfortunately for me, this code is going to continue to be
developed on the Dec-20 while we want to get the same code up on the
3600. So I have to try to set it up so the translation can happen
often rather than just once. That means going back to the Interlisp
code and putting it into shape so that a minimum amount of
hand-patching need be done.
------------------------------
Date: 19 Aug 83 10:52:11-PDT (Fri)
From: harpo!eagle!allegra!jdd @ Ucb-Vax
Subject: Lisps on 68000's
Article-I.D.: allegra.1760
A while ago I posted a query about Lisps on 68000's. I got
essentially zero replies, so let me post what I know and see whether
anyone can add to it.
First, Franz Lisp is being ported from the VAX to 68000's. However,
the ratio of random rumors to solid facts concerning this undertaking
seems the greatest since the imminent availability of NIL. Moreover,
I don't really like Franz; it has too many seams showing (I've had too
many programs die without warning from segmentation errors and the
like).
Then there's T. T sounds good, but the people who are saying it's
great are the same ones trying to sell it to me for several thousand
dollars, so I'd like to get some more disinterested opinions first.
The only person I've talked to said it was awful, but he admits he
used an early version.
I have no special knowledge of PSL, particularly of the user
environment or of how useful or standard its dialect looks, nor of the
status of its 68000 version.
As for an eventual Common Lisp on a 68000, well, who knows?
There are also numerous toy systems floating around, but none I would
consider for serious work.
Well, that's about everything I know; can any correct me or add to the
list?
Cheers,
John ("Don't Wanna Program in C") DeTreville
Bell Labs, Murray Hill
[I will reprint some of the recent Info-Graphics discussion of SUNs
and other workstations as LISP-based graphics servers. Several of
the comments relate to John's query. -- KIL]
------------------------------
Date: Fri, 5 Aug 83 21:30:22 PDT
From: fateman%ucbkim@Berkeley (Richard Fateman)
Subject: SUNs, 3600s, and Lisp
[Reprinted from the Info-Graphics discussion list.]
[...]
In answer to Fred's original query, (I replied to him personally
earlier ), Franz has been running on a SUN since January, 1983. We
find it runs Lisp faster than a VAX 750, and with expected performance
improvements, may be close to a VAX 780. (= about 2.5 to 4 times
slower than a KL-10). This makes it almost irrelevant using Franz on
a VAX. Yet more specifically in answer to FRD's question, Franz on
the SUN has full access to the graphics software on it, and one could
set up inter-process communication between a Franz on a VAX and
something else (e.g. Franz) on a SUN. A system for shipping smalltalk
pictures to SUNs runs at UCB.
Franz runs on other 68000 UNIX workstations, including Pixel, Dual,
and Apple Lisa. Both Interlisp-D and MIT LispMachine Lisp have more
highly developed graphics stuff at the moment.
As far as other lisps, I would expect PSL and T, which run on Apollo
Domain 68000 systems, to be portable towards the SUN, and I would not
be surprised if other systems turn up. For the moment though, Franz
seems to be alone. Most programs run on the SUN without change (e.g.
Macsyma).
------------------------------
Date: Sat 6 Aug 83 13:39:13-PDT
From: Bill Nowicki <NOWICKI@SU-SCORE.ARPA>
Subject: Re: LISP & SUNs ...
[Reprinted from the Info-Graphics discussion list.]
You can certainly run Franz under Unix from SMI, but it is SLOW. Most
Lisps are still memory hogs, so as was pointed out, you need a
$100,000 Lisp machine to get decent response.
If $100,000 is too much for you to spend on each programmer, you might
want to look at what we are doing on the fourth floor here at
Stanford. We are running a small real-time kernel in a cheap, quiet,
diskless SUN, which talks over the network to various servers. Bill
Yeager of Sumex has written a package which runs under interLisp and
talks to our Virtual Graphics Terminal Service. InterLisp can be run
on VAX/Unix or VAX/VMS systems, TOPS-20, or Xerox D machines. The
cost/performance ratio is very good, since each workstation only needs
256K of memory, frame buffer, CPU, and Ethernet interface, while the
DECSystem-20 or VAX has 8M bytes and incredibly fast system
performance (albeit shared between 20 users).
We are also considering both PSL and T since they already have 68000
compilers. I don't know how this discussion got on Info-Graphics.
-- Bill
------------------------------
Date: 6 Aug 1983 1936-MDT
From: JW-Peterson@UTAH-20 (John W. Peterson)
Subject: Lisp Machines
[Reprinted from the Info-Graphics discussion list.]
Folks who don't have >$60K to spend on a Lisp Machine may want to
consider Utah's Portable Standard Lisp (PSL) running on the Apollo
workstation. Apollo PSL has been distributed for several months now.
PSL is a full Lisp implementation, complete with a 68000 Lisp
compiler. The standard distribution also comes with a wide range of
utilities.
PSL has been in use at Utah for almost a year now and is supporting
applications in computer algebra (the Reduce system from Rand), VLSI
design, and computer-aided geometric design.
In addition, the Apollo implementation of PSL comes with a large and
easily extensible system interface package. This provides easy,
interactive access to the resident Apollo window package, graphics
library, process communication system and other operating system
services.
If you have any questions about the system, feel free to contact me
via
JW-PETERSON@UTAH-20 (arpa) or
...!harpo!utah-cs!jwp (uucp)
jw
------------------------------
Date: Sun, 7 Aug 83 12:08:08 CDT
From: Mike.Caplinger <mike.rice@Rand-Relay>
Subject: SUNs
[Reprinted from the Info-Graphics discussion list.]
[...]
Lisp is available from UCB (ftp from ucb-vax) for the SUN and many
simialr 68K-based machines. We have it up on our SMI SUNs running
4.1c UNIX. It seems about as good as Franz on the VAX, which from a
graphics standpoint, is saying nothing at all.
By the way, the SUN graphics library, SUNCore, seems to be an OK
implementation of the SIG Core standard. It has some omissions and
extensions, like every implementation. I haven't used it extensively
yet, and it has some problems, but it should get some good graphics
programs going fairly rapidly. I haven't yet seen a good graphics
demo for the SUN. I hope this isn't indicative of what you can
actually do with one.
By the way, "Sun Workstation" is a registered trademark of Sun
Microsystems, Inc. You may be able to get a "SUN-like" system
elsewhere. I'm not an employee of Sun, I just have to deal with them
a lot...
------------------------------
End of AIList Digest
********************
∂22-Aug-83 1632 ELYSE@SU-SCORE.ARPA Your current address, Visitors
Received: from SU-SCORE by SU-AI with TCP/SMTP; 22 Aug 83 16:32:10 PDT
Date: Mon 22 Aug 83 16:36:10-PDT
From: Elyse Krupnick <ELYSE@SU-SCORE.ARPA>
Subject: Your current address, Visitors
To: CSD-Faculty: ;
Stanford-Phone: (415) 497-9746
There is going to be a reception this fall for faculty and I need your
current address. This will also be used to update the upcoming Faculty/Staff
directory for 1983-84.
Please give me:
1. your name
2. your current home address
3. your home phone #
4. any changes to your title, office address and/or phone #
If you don't want your home address or phone listed in the directory let me
know and I won't include it in the public directory, but I'll still need it
for our own files.
Also, if you have any visitors to the department this fall I'd like to have
their names and addresses so I may invite them to the reception. Thanks for
your help. Elyse.
-------
∂22-Aug-83 1650 ELYSE@SU-SCORE.ARPA Chairman at North Carolina University
Received: from SU-SCORE by SU-AI with TCP/SMTP; 22 Aug 83 16:50:12 PDT
Date: Mon 22 Aug 83 16:53:20-PDT
From: Elyse Krupnick <ELYSE@SU-SCORE.ARPA>
Subject: Chairman at North Carolina University
To: CSD-Faculty: ;
Stanford-Phone: (415) 497-9746
The Computer Science Department at the University of No. Carolina is seeking
a replacement for Fred Brooks. A flyer announcing the position is in my office.
If you are interested in seeing the flyer, please contact Elyse.
Gene.
-------
∂23-Aug-83 1228 LAWS@SRI-AI.ARPA AIList Digest V1 #46
Received: from SRI-AI by SU-AI with TCP/SMTP; 23 Aug 83 12:27:41 PDT
Date: Tuesday, August 23, 1983 10:53AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #46
To: AIList@SRI-AI
AIList Digest Tuesday, 23 Aug 1983 Volume 1 : Issue 46
Today's Topics:
Artificial Intelligence - Prejudice & Frames & Turing Test & Evolution,
Fifth Generation - Top-Down Research Approach
----------------------------------------------------------------------
Date: Thu 18 Aug 83 14:49:13-PDT
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Prejudice
The message from (I think .. apologies if wrong) Stan the Leprechaun,
which sets up "rational thought" as the opposite of "right-wingism"
and of "irascibility", disproves the contention in another message
that "bigotry and intelligence are mutually exclusive". Indeed this
latter message is its own disproof, at least by my definition of
bigotry. All of which leads me to believe that one or other of them
*was* sent by an AI project Flamer-type program. Good work.
- Richard
------------------------------
Date: 22 Aug 83 19:45:38-EDT (Mon)
From: The soapbox of Gene Spafford <spaf%gatech@UDel-Relay>
Subject: AI and Human Intelligence
[The following are excerpts from several interchanges with the author.
-- KIL]
Words mean not necessarily what I want them to mean nor what you want
them to mean, but what we all agree that they mean. My point is that
we must very possibly consider emotions and ethics in any model we
care to construct of a "human" intelligence. The ability to handle a
conversation, as is implied by the Turing test, is not sufficient in
my eyes to classify something as "intelligent." That is, what
*exactly* is intelligence? Is it something measured by an IQ test?
I'm sure you realize that that particular point is a subject of much
conjecture.
If these discussion groups are for discussion of artificial
"intelligence," then I would like to see some thought given as to the
definition of "intelligence." Is emotion part of intelligence? Is
superstition part of intelligence?
FYI, I do not believe what I suggested -- that bigots are less than
human. I made that suggestion to start some comments. I have gotten
some interesting mail from people who have thought some about the
idea, and from a great many people who decided I should be locked away
for even coming up with the idea.
[...]
That brought to mind a second point -- what is human? What is
intelligence? Are they the same thing? (My belief -- no, they aren't.)
I proposed that we might classify "human" as being someone who *at
least tries* to overcome irrational prejudices and bigotry. More than
ever we need such qualities as open-mindedness and compassion, as
individuals and as a society. Can those qualities be programmed into
an AI system? [...]
My original submission to Usenet was intended to be a somewhat
sarcastic remark about the nonsense that was going on in a few of the
newsgroups. Responses to me via mail indicate that at least a few
people saw through to some deeper, more interesting questions. For
those people who immediately jumped on my case for making the
suggestion, not only did you miss the point -- you *are* the point.
--
The soapbox of Gene Spafford
CSNet: Spaf @ GATech ARPA: Spaf.GATech @ UDel-Relay
uucp: ...!{sb1,allegra,ut-ngp}!gatech!spaf
...!duke!mcnc!msdc!gatech!spaf
------------------------------
Date: 18 Aug 83 13:40:03-PDT (Thu)
From: decvax!linus!vaxine!wjh12!brh @ Ucb-Vax
Subject: Re: AI Projects on the Net
Article-I.D.: wjh12.299
I realize this article was a while ago, but I'm just catching
up with my news reading, after vacation. Bear with me.
I wonder why folks think it would be so easy for an AI program
to "change it's thought processes" in ways we humans can't. I submit
that (whether it's an expert system, experiment in KR or what) maybe
the suggestion to 'not think about zebras' would have a similar
effect on an AI proj. as on a human. After all, it IS going to have
to decipher exactly what you meant by the suggestion. On the other
hand, might it not be easier for one of you humans .... we, I mean ...
to consciously think of something else, and 'put it out of your
mind'??
Still an open question in my mind... (Now, let's hope this
point isn't already in an article I haven't read...)
Brian Holt
wjh!brh
------------------------------
Date: Friday, 19 Aug 1983 09:39-PDT
From: turner@rand-unix
Subject: Prejudice and Frames, Turing Test
I don't think prejudice is a by-product of Minsky-like frames.
Prejudice is simply one way to be misinformed about the world. In
people, we also connect prejudice with the inability to correct
incorrect information in light of experiences which prove it to be wrong.
Nothing in Minsky frames as opposed to any other theory is a
necessary condition for this. In any understanding situation, the
thinker must call on background information, regardless of how that is
best represented. If this background information is incorrect and not
corrected in light of new information, then we may have prejudice.
Of course, this is a subtle line. A scientist doesn't change his
theories just because a fact wanders by that seems to contradict his
theories. If he is wise, he waits until a body of irrefutable
evidence builds up. Is he prejudiced towards his current theories?
Yes, I'd say so, but in this case it is a useful prejudice.
So prejudice is really related to the algorithm for modifying known
information in light of new information. An algorithm that resists
change too strongly results in prejudice. The opposite extreme -- an
algorithm that changes too easily -- results in faddism, blowing the
way the wind blows and so on.
-----------
Stan's point in I:42 about Zeno's paradox is interesting. Perhaps
the mind cast forced upon the AI community by Alan Turing is wrong.
Is Turing's Test a valid test for Artificial Intelligence?
Clearly not. It is a test of Human Mimicry Ability. It is the
assumption that the ability to mimic a human requires intelligence.
This has been shown in the past not to be entirely true; ELIZA is an
example of a program that clearly has no intelligence and yet mimics a
human in a limited domain fairly well.
A common theme in science fiction is "Alien Intelligence". That is,
the sf writer bases his story on the idea: "What if alien
intelligence wasn't like human intelligence?" Many interesting
stories have resulted from this basis. We face a similar situation
here. We assume that Artificial Intelligence will be detectable by
its resemblance to human intelligence. We really have little ground
for this belief.
What we need is a better definition of intelligence, and a test
based on this definition. In the Turing mind set, the definition of
intelligence is "acts like a human being" and that is clearly
insufficient. The Turing test also leads one to think erroneously
that intelligence is a property with two states (intelligent and
non-intelligent) when even amongst humans there is a wide variance in
the level of intelligence.
My initial feeling is to relate intelligence to the ability to
achieve goals in a given environment. The more intelligent man today
is the one who gets what he wants; in short, the more you achieve your
goals, the more intelligent you are. This means that a person may be
more intelligent in one area of life than in another. He is, for
instance, a great businessman but a poor father. This is no surprise.
We all recognize that people have different levels of competence in
different areas.
Of course, this definition has problems. If your goal is to lift
great weights, then your intelligence may be dependent on your
physical build. That doesn't seem right. Is a chess program more
intelligent when it runs on a faster machine?
In the sense of this definition we already have many "intelligent"
programs in limited domains. For instance, in the domain of
electronic mail handling, there are many very intelligent entities.
In the domain of human life, no electronic entities. In the domain of
human politics, no human entities (*ha*ha*).
I'm sure it is nothing new to say that we should not worry about the
Turing test and instead worry about more practical and functional
problems in the field of AI. It does seem, however, that the Turing
Test is a limited and perhaps blinding outlook onto the AI field.
Scott Turner
turner@randvax
------------------------------
Date: 21 Aug 83 13:01:46-PDT (Sun)
From: harpo!eagle!mhuxt!mhuxi!mhuxa!ulysses!smb @ Ucb-Vax
Subject: Hofstadter
Article-I.D.: ulysses.560
Douglas Hofstadter is the subject of today's N.Y. Times Magazine cover
story. The article is worth reading, though not, of course,
particularly deep technically. Among the points made: that
Hofstadter is not held in high regard by many AI workers, because they
regard him as a popularizer without any results to back up his
theories.
------------------------------
Date: Tue, 23 Aug 83 10:35 PDT
From: "Glasser Alan"@LLL-MFE.ARPA
Subject: Program Genesis
After reading in the New York Times Sunday Magazine of August 21 about
Douglas Hofstadter's latest idea on artificial intelligence arising
from the interplay of lower levels, I was inspired to carry his
suggestion to the logical limit. I wrote the following item partly in
jest, but the idea may have some merit, at least to stimulate
discussion. It was also inspired by Stanislaw Lem's story "Non
Serviam".
------------------------------------------------------------------------
PROGRAM GENESIS
A COMPUTER MODEL OF THE PRIMORDIAL SOUP
The purpose of this program is to model the primordial soup that
existed in the earth's oceans during the period when life first
formed. The program sets up a workspace (the ocean) in which storage
space in memory and CPU time (resources) are available to
self-replicating modules of memory organization (organisms).
Organisms are sections of code and data which, when run, cause copies
of themselves to be written into other regions of the workspace and
then run. Overproduction of species, competition for scarce
resources, and occasional copying errors, either accidental or
deliberately introduced, create all the conditions necessary for the
onset of evolutionary processes. A diagnostic package provides an
ongoing picture of the evolving state of the system. The goal of the
project is to monitor the evolutionary process and see what this might
teach us about the nature of evolution. A possible long-range
application is a novel method for producing artificial intelligence.
The novelty is, of course, not complete, since it has been done at
least once before.
------------------------------
Date: 18 Aug 83 11:16:24-PDT (Thu)
From: decvax!linus!utzoo!dciem!mmt @ Ucb-Vax
Subject: Re: Japanese 5th Generation Effort
Article-I.D.: dciem.293
There seems to be an analogy between the 5th generation project and
the ARPA-SUR project on automatic speech understanding of a decade
ago. Both are top-down, initiated with a great deal of hope, and
dependent on solving some "nitty-gritty problems" at the bottom. The
result of the ARPA-SUR project was at first to slow down research in
ASR (automatic speech recognition) because a lot of people got scared
off by finding how hard the problem really is. But it did, as Robert
Amsler suggests the 5th generation project will, show just what
"nitty-gritty problems" are important. It provided a great step
forward in speech recognition, not only for those who continued to
work on projects initiated by ARPA-SUR, but also for those who have
come afterward. I doubt we would now be where we are in ASR if it had
not been for that apparently failed project ten years ago.
(Parenthetically, notice that a lot of the subsequent advances in ASR
have been due to the Japanese, and that European/American researchers
freely use those advances.)
Martin Taylor
------------------------------
End of AIList Digest
********************
∂24-Aug-83 0852 TAJNAI@SU-SCORE.ARPA My talk for Japan
Received: from SU-SCORE by SU-AI with TCP/SMTP; 24 Aug 83 08:52:37 PDT
Date: Wed 24 Aug 83 08:53:04-PDT
From: Carolyn Tajnai <TAJNAI@SU-SCORE.ARPA>
Subject: My talk for Japan
To: faculty@SU-SCORE.ARPA
I'm giving the talk "Links Between Stanford and Industry" on Tuesday,
Aug. 30, Skilling Auditorium, 4:15 p.m. This is a "dry-run" before
leaving for Japan on Sept. 4.
If you have any visitors, they might be interested in attending. You
are invited to attend and critique. I have some interesting historical
slides.
Incidentally, I'll be giving the talk at IBM Japan, Tokyo and Kyoto
Universities and 9 Forum companies. I'll be returning in time for the
new student orientation on Thursday, Sept. 22.
Carolyn
-------
∂24-Aug-83 1130 JF@SU-SCORE.ARPA student support
Received: from SU-SCORE by SU-AI with TCP/SMTP; 24 Aug 83 11:29:49 PDT
Date: Wed 24 Aug 83 11:30:47-PDT
From: Joan Feigenbaum <JF@SU-SCORE.ARPA>
Subject: student support
To: faculty@SU-SCORE.ARPA
cc: atkinson@SU-SCORE.ARPA, mwalker@SU-SCORE.ARPA, bscott@SU-SCORE.ARPA,
yearwood@SU-SCORE.ARPA
At the student meeting in May, 1983, I volunteered to become the
"Fellowship Committee". As I understand it, my responsibility is to
coordinate information about extra-departmental sources of funding available
to PhD students in the CSD and make sure the information is disseminated to the
students in time for them to act on it. (Perhaps there is a sizeable group
of Masters students in need of extra-departmental, non-employer funding, but
I don't have the time or the knowledge to take up their cause.)
During the academic year 1982/83, many fellowships were awarded,
but in at least three cases the eligibility and deadline information was
not given to the students and their recommenders until several days before
(and in one case AFTER) the official deadlines had passed. Needless to
say, this makes it difficult for students who are in the middle of course
work to apply and can make it impossible to get recommendations from
people on the east coast or otherwise not at Stanford.
I have prepared a list of the fellowships currently held by PhD
students that I am aware of. I would like, in conjunction with the
Orientation Committee, to let this fall's entering class know what it
can apply for as soon as possible. I am sure that there are many more
fellowships than I know about, and I would appreciate very much any information
you can provide. Sometime during Spring Quarter of 1984, I plan to quit, and
I think it would be appropriate at that time for these duties to become a very
high priority of a paid staff member, because it is obviously in the
department's best interest to get this job done right.
NSF (fall deadline. there is a special one for minorities, but not
women.)
Hertz (early fall deadline)
IBM (march deadline. is there a special one for women and minorities?)
Xerox (for women and minorities.)
Bell Labs
Please let me know ANY INFORMATION you have about other fellowships that
are already available or sources of funding that you think could become
available if "the department" (personified by whomever seems most appropriate)
went after them. I can assume such administrative responsibilities as
getting the appropriate paper-work to the students and subsequently from the
students to the prospective funders.
At least one of last year's entering PhD students got first-year-only
support from a company which doesn't support a whole lot of our students on a
regular basis. I think that the company was G.E., and I plan to find out for
sure. Paul Armer was the one who arranged it, and he is gone so the
information about such things might be gone as well. Perhaps some more of
these first-year-only fellowships can be obtained, and perhaps more industrial
support for students is out there for the asking. As I said, I think it should
be the explicit responsibility of a paid staff member to go out there and ask
for it--because ANYTHING BEATS T.A.'ing and some r.a.'s as well.
Thanks for your cooperation,
Joan Feigenbaum
(jf@score, diablo, and sail)
-------
∂24-Aug-83 1206 LAWS@SRI-AI.ARPA AIList Digest V1 #47
Received: from SRI-AI by SU-AI with TCP/SMTP; 24 Aug 83 12:05:26 PDT
Date: Wednesday, August 24, 1983 10:34AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #47
To: AIList@SRI-AI
AIList Digest Wednesday, 24 Aug 1983 Volume 1 : Issue 47
Today's Topics:
Request - AAAI-83 Registration,
Logic Programming - PARLOG & PROLOG & LISP Prologs
----------------------------------------------------------------------
Date: 22 Aug 83 16:50:55-PDT (Mon)
From: harpo!eagle!allegra!jdd @ Ucb-Vax
Subject: AAAI-83 Registration
Article-I.D.: allegra.1777
Help! I put off registering for AAAI-83 until too late, and now I
hear that it's overbooked! (I heard 7000 would-be registrants and
1500 places, or some such.) If you're registered but find you can't
attend, please let me know, or if you have any other suggestions, feel
free.
Cheers, John ("Something Wrong With My Planning Heuristics")
DeTreville Bell Labs, Murray Hill
------------------------------
Date: 23 Aug 83 1337 PDT
From: Diana Hall <DFH@SU-AI>
Subject: PARLOG
[Reprinted from the SCORE BBoard.]
Parlog Seminar
Keith Clark will give a seminar on Parlog on Thursday, Sept. 1, at 3 p.m.
in Room 252 MJH.
PARLOG: A PARALLEL LOGIC PROGRAMMING LANGUAGE
Keith L. Clark
ABSTRACT
PARLOG is a logic programming language in the sense that
nearly every definition and query can be read as a sentence of
predicate logic. It differs from PROLOG in incorporating parallel
modes of evaluation. For reasons of efficient implementation, it
distinguishes and separates and-parallel and or-parallel evaluation.
PARLOG relations are divided into two types: and-relations
and or-relations. A sequence of and-relation calls can be evaluated
in parallel with shared variables acting as communication channels.
Only one solution to each call is computed.
A sequence of or-relation calls is evaluated sequentially but
all the solutions are found by a parallel exploration of the different
evaluation paths. A set constructor provides the main interface
between and-relations and or-relations. This wraps up all the
solutions to a sequence of or-relation calls in a list. The solution
list can be concurrently consumed by an and-relation call.
The and-parallel definitions of relations that will only be
used in a single functional mode can be given using conditional
equations. This gives PARLOG the syntactic convenience of functional
expressions when non-determinism is not required. Functions can be
invoked eagerly or lazily; the eager evaluation of nested function
calls corresponds to and-parallel evaluation of conjoined relation
calls.
This paper is a tutorial introduction and semi-formal
definition of PARLOG. It assumes familiarity with the general
concepts of logic programming.
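As a very rough illustration of the set-constructor idea only, here is a
short sketch in ordinary Common Lisp (not PARLOG; all names below are
invented, and the concurrency of PARLOG, in which the consumer can run
while solutions are still being produced, is omitted): an "or-relation"
is modelled as a generator that emits each of its solutions, the set
constructor collects them into a list, and an "and-relation" call then
consumes that list.

;; Collect every value a generator emits; roughly the set constructor.
(defun solutions (generator)
  (let ((acc '()))
    (funcall generator (lambda (x) (push x acc)))
    (nreverse acc)))

;; An "or-relation": each clause contributes one solution.
(defun color (emit)
  (funcall emit 'red)
  (funcall emit 'green)
  (funcall emit 'blue))

;; An "and-relation" call consuming the wrapped-up solution list.
(defun report (colors)
  (dolist (c colors)
    (format t "solution: ~a~%" c)))

;; (report (solutions #'color)) prints the three solutions in turn.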
------------------------------
Date: Thu 18 Aug 83 20:00:36-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: There are Prologs and Prologs ...
In the July issue of SIGART an article by Richard Wallace describes
PiL, yet another Prolog in Lisp. The author claims that his
interpreter shows that "it is easy to extend Lisp to do what Prolog
does."
It is a useful pedagogical exercise for Lisp users interested in logic
programming to look at a simple, clean implementation of a subset of
Prolog in Lisp. A particularly illuminating implementation and
discussion is given in "Structure and Interpretation of Computer
Programs", a set of MIT lecture notes by Abelson and Sussman.
However, such simple interpreters (even the Abelson and Sussman one
which is far better than PiL) are not a sufficient basis for the claim
that "it is easy extend Lisp to do what Prolog does." What Prolog
"does" is not just to make certain deductions in a certain order, but
also MAKE THEM VERY FAST. Unfortunately, ALL Prologs in Lisp I know of
fail in this crucial aspect (by factors between 30 and 1000).
Why is speed such a crucial aspect of Prolog (or of Lisp, for that
matter)? First, because the development of complex experimental
programs requires MANY, MANY experiments, which just could not be done
if the systems were, say, 100 times slower than they are. Second,
because a Prolog (Lisp) system needs to be written mostly in Prolog
(Lisp) to support the extensibility that is a central aspect of modern
interactive computing environments.
The following paraphrase of Wallace's claim shows its absurdity: "[LiA
(Lisp in APL) shows] that it is easy to extend APL to do what Lisp does."
Really? All of what Maclisp does? All of what ZetaLisp does?
Lisp and Prolog are different if related languages. Both have their
supporters. Both have strengths and (serious) weaknesses. Both can be
implemented with comparable efficiency. It is educational to look
both at (sub)Prologs in Lisp and (sub)Lisps in Prolog. Let's not claim
discoveries of philosopher's stones.
Fernando Pereira
AI Center
SRI International
------------------------------
Date: Wed, 17 Aug 1983 10:20 EDT
From: Ken%MIT-OZ@MIT-MC
Subject: FOOLOG Prolog
[Reprinted from the PROLOG Digest.]
Here is a small Prolog ( FOOLOG = First Order Oriented LOGic )
written in Maclisp. It includes the evaluable predicates CALL,
CUT, and BAGOF. I will probably permanently damage my reputation
as a MacLisp programmer by showing it, but as an attempt to cut
the hedge, I can say that I wanted to see how small one could
make a Prolog while maintaining efficiency ( approx 2 pages; 75%
of the speed of the Dec-10 Prolog interpreter ). It is actually
possible to squeeze Prolog into 16 lines. If you are interested
in that one and in FOOLOG, I have a ( very ) brief report describing
them that I can send you. Also, I'm glad to answer any questions
about FOOLOG. For me, the best is if you send messages by Snail Mail,
since I do not have a net connection. If that is uncomfortable, you
can also send messages via Ken Kahn, who forwards them.
My address is:
Martin Nilsson
UPMAIL
Computing Science Department
Box 2059
S-750 02 UPPSALA, Sweden
---------- Here is a FOOLOG sample run:
(load 'foolog) ; Lower case is user type-in
; Loading DEFMAX 9844442.
(progn (defpred member ; Definition of MEMBER predicate
((member ?x (?x . ?l)))
((member ?x (?y . ?l)) (member ?x ?l)))
(defpred cannot-prove ; and CANNOT-PROVE predicate
((cannot-prove ?goal) (call ?goal) (cut) (nil))
((cannot-prove ?goal)))
'ok)
OK
(prove (member ?elem (1 2 3)) ; Find elements of the list
(writeln (?elem is an element)))
(1. IS AN ELEMENT)
MORE? t ; Find the next solution
(2. IS AN ELEMENT)
MORE? nil ; This is enough
(TOP)
(prove (cannot-prove (= 1 2))) ; The two cannot-prove cases
MORE? t
NIL
(prove (cannot-prove (= 1 1)))
NIL
---------- And here is the source code:
; FOOLOG Interpreter (c) Martin Nilsson UPMAIL 1983-06-12
(declare (special *inf* *e* *v* *topfun* *n* *fh* *forward*)
(special *bagof-env* *bagof-list*))
(defmacro defknas (fun args &rest body)
`(defun ,fun macro (l)
(cons 'progn (sublis (mapcar 'cons ',args (cdr l))
',body))))
; ---------- Interpreter
(setq *e* nil *fh* nil *n* nil *inf* 0
*forward* (munkam (logior 16. (logand (maknum 0) -16.))))
(defknas imm (m x) (cxr x m))
(defknas setimm (m x v) (rplacx x m v))
(defknas makrecord (n)
(loop with r = (makhunk n) and c for i from 1 to (- n 2) do
(setq c (cons nil nil))
(setimm r i (rplacd c c)) finally (return r)))
(defknas transfer (x y)
(setq x (prog1 (imm x 0) (setq y (setimm x 0 y)))))
(defknas allocate nil
(cond (*fh* (transfer *fh* *n*) (setimm *n* 7 nil))
((setq *n* (setimm (makrecord 8) 0 *n*)))))
(defknas deallocate (on)
(loop until (eq *n* on) do (transfer *n* *fh*)))
(defknas reset (e n) (unbind e) (deallocate n) nil)
(defknas ult (m x)
(cond ((or (atom x) (null (eq (car x) '/?))) x)
((< (cadr x) 7)
(desetq (m . x) (final (imm m (cadr x)))) x)
((loop initially (setq x (cadr x)) until (< x 7) do
(setq x (- x 6)
m (or (imm m 7)
(imm (setimm m 7 (allocate)) 7)))
finally (desetq (m . x) (final (imm m x)))
(return x)))))
(defknas unbind (oe)
(loop with x until (eq *e* oe) do
(setq x (car *e*)) (rplaca x nil) (rplacd x x) (pop *e*)))
(defknas bind (x y n)
(cond (n (push x *e*) (rplacd x (cons n y)))
(t (push x *e*) (rplacd x y) (rplaca x *forward*))))
(lap-a-list '((lap final subr) (hrrzi 1 @ 0 (1)) (popj p) nil))
; (defknas final (x) (cdr (memq nil x))) ; equivalent
(defknas catch-cut (v e)
(and (null (and (eq (car v) 'cut) (eq (cdr v) e))) v))
(defun prove fexpr (gs)
(reset nil nil)
(seek (list (allocate)) (list (car (convq gs nil)))))
(defun seek (e c)
(loop while (and c (null (car c))) do (pop e) (pop c))
(cond ((null c) (funcall *topfun*))
((atom (car c)) (funcall (car c) e (cdr c)))
((loop with rest = (cons (cdar c) (cdr c)) and
oe = *e* and on = *n* and e1 = (allocate)
for a in (symeval (caaar c)) do
(and (unify e1 (cdar a) (car e) (cdaar c))
(setq *inf* (1+ *inf*)
*v* (seek (cons e1 e)
(cons (cdr a) rest)))
(return (catch-cut *v* e1)))
(unbind oe)
finally (deallocate on)))))
(defun unify (m x n y)
(loop do
(cond ((and (eq (ult m x) (ult n y)) (eq m n)) (return t))
((null m) (return (bind x y n)))
((null n) (return (bind y x m)))
((or (atom x) (atom y)) (return (equal x y)))
((null (unify m (pop x) n (pop y))) (return nil)))))
; ---------- Evaluable Predicates
(defun inst (m x)
(cond ((let ((y x))
(or (atom (ult m x)) (and (null m) (setq x y)))) x)
((cons (inst m (car x)) (inst m (cdr x))))))
(defun lisp (e c)
(let ((n (pop e)) (oe *e*) (on *n*))
(or (and (unify n '(? 2) (allocate) (eval (inst n '(? 1))))
(seek e c))
(reset oe on))))
(defun cut (e c)
(let ((on (cadr e))) (or (seek (cdr e) c) (cons 'cut on))))
(defun call (e c)
(let ((m (car e)) (x '(? 1)))
(seek e (cons (list (cons (ult m x) '(? 2))) c))))
(defun bagof-topfun nil
(push (inst *bagof-env* '(? 1)) *bagof-list*) nil)
(defun bagof (e c)
(let* ((oe *e*) (on *n*) (*bagof-list* nil)
(*bagof-env* (car e)))
(let ((*topfun* 'bagof-topfun)) (seek e '(((call (? 2))))))
(or (and (unify (pop e) '(? 3) (allocate) *bagof-list*)
(seek e c))
(reset oe on))))
; ---------- Utilities
(defun timer fexpr (x)
(let* ((*rset nil) (*inf* 0) (x (list (car (convq x nil))))
(t1 (prog2 (gc) (runtime) (reset nil nil)
(seek (list (allocate)) x)))
(t1 (- (runtime) t1)))
(list (// (* *inf* 1000000.) t1) 'LIPS (// t1 1000.)
'MS *inf* 'INF)))
(eval-when (compile eval load)
(defun convq (t0 l0)
(cond ((pairp t0) (let* (((t1 . l1) (convq (car t0) l0))
((t2 . l2) (convq (cdr t0) l1)))
(cons (cons t1 t2) l2)))
((null (and (symbolp t0) (eq (getchar t0 1) '/?)))
(cons t0 l0))
((memq t0 l0)
(cons (cons '/? (cons (length (memq t0 l0))
t0)) l0))
((convq t0 (cons t0 l0))))))
(defmacro defpred (pred &rest body)
`(setq ,pred ',(loop for clause in body
collect (car (convq clause nil)))))
(defpred true ((true)))
(defpred = ((= ?x ?x)))
(defpred lisp ((lisp ?x ?y) . lisp))
(defpred cut ((cut) . cut))
(defpred call ((call (?x . ?y)) . call))
(defpred bagof ((bagof ?x ?y ?z) . bagof))
(defpred writeln
((writeln ?x) (lisp (progn (princ '?x) (terpri)) ?y)))
(setq *topfun*
'(lambda nil (princ "MORE? ")
(and (null (read)) '(top))))
------------------------------
Date: Wed, 17 Aug 1983 10:14 EDT
From: Ken%MIT-OZ@MIT-MC
Subject: A Pure Prolog Written In Pure Lisp
[Reprinted from the PROLOG Digest.]
;; The following is a tiny Prolog interpreter in MacLisp
;; written by Ken Kahn.
;; It was inspired by other tiny Lisp-based Prologs of
;; Par Emanuelson and Martin Nilsson
;; There are no side-effects anywhere in the implementation,
;; though it is very slow, of course.
(defun Prolog (database) ;; a top-level loop for Prolog
(prove (list (rename-variables (read) '(0)))
;; read a goal to prove
'((bottom-of-environment)) database 1)
(prolog database))
(defun prove (list-of-goals environment database level)
;; proves the conjunction of the list-of-goals
;; in the current environment
(cond ((null list-of-goals)
;; succeeded since there are no goals
(print-bindings environment environment)
;; the user answers "y" or "n" to "More?"
(not (y-or-n-p "More?")))
(t (try-each database database
(rest list-of-goals) (first list-of-goals)
environment level))))
(defun try-each (database-left database goals-left goal
environment level)
(cond ((null database-left)
()) ;; fail since nothing left in database
(t (let ((assertion
;; level is used to uniquely rename variables
(rename-variables (first database-left)
(list level))))
(let ((new-environment
(unify goal (first assertion) environment)))
(cond ((null new-environment) ;; failed to unify
(try-each (rest database-left)
database
goals-left
goal
environment level))
((prove (append (rest assertion) goals-left)
new-environment
database
(add1 level)))
(t (try-each (rest database-left)
database
goals-left
goal
environment
level))))))))
(defun unify (x y environment)
(let ((x (value x environment))
(y (value y environment)))
(cond ((variable-p x) (cons (list x y) environment))
((variable-p y) (cons (list y x) environment))
((or (atom x) (atom y))
(and (equal x y) environment))
(t (let ((new-environment
(unify (first x) (first y) environment)))
(and new-environment
(unify (rest x) (rest y)
new-environment)))))))
(defun value (x environment)
(cond ((variable-p x)
(let ((binding (assoc x environment)))
(cond ((null binding) x)
(t (value (second binding) environment)))))
(t x)))
(defun variable-p (x) ;; a variable is a list beginning with "?"
(and (listp x) (eq (first x) '?)))
(defun rename-variables (term list-of-level)
(cond ((variable-p term) (append term list-of-level))
((atom term) term)
(t (cons (rename-variables (first term)
list-of-level)
(rename-variables (rest term)
list-of-level)))))
(defun print-bindings (environment-left environment)
(cond ((rest environment-left)
(cond ((zerop
(third (first (first environment-left))))
(print
(second (first (first environment-left))))
(princ " = ")
(prin1 (value (first (first environment-left))
environment))))
(print-bindings (rest environment-left) environment))))
;; a sample database:
(setq db '(((father jack ken))
((father jack karen))
((grandparent (? grandparent) (? grandchild))
(parent (? grandparent) (? parent))
(parent (? parent) (? grandchild)))
((mother el ken))
((mother cele jack))
((parent (? parent) (? child))
(mother (? parent) (? child)))
((parent (? parent) (? child))
(father (? parent) (? child)))))
;; the following are utilities
(defun first (x) (car x))
(defun rest (x) (cdr x))
(defun second (x) (cadr x))
(defun third (x) (caddr x))
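For readers who want to try it, a session would look roughly like the
following. This transcript is a reconstruction, not part of the original
posting, and the exact prompt text depends on the Lisp used:
;; (prolog db)                   ; start the top-level loop on the sample database
;; (grandparent (? who) ken)     ; goal typed by the user
;; WHO = CELE                    ; binding of the goal's variable is printed
;; More?                         ; answering "n" returns to read the next goal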
------------------------------
End of AIList Digest
********************
∂24-Aug-83 1321 BSCOTT@SU-SCORE.ARPA Re: student support
Received: from SU-SCORE by SU-AI with TCP/SMTP; 24 Aug 83 13:21:04 PDT
Date: Wed 24 Aug 83 13:24:49-PDT
From: Betty Scott <BSCOTT@SU-SCORE.ARPA>
Subject: Re: student support
To: JF@SU-SCORE.ARPA, faculty@SU-SCORE.ARPA
cc: atkinson@SU-SCORE.ARPA, mwalker@SU-SCORE.ARPA, yearwood@SU-SCORE.ARPA,
BSCOTT@SU-SCORE.ARPA
In-Reply-To: Message from "Joan Feigenbaum <JF@SU-SCORE.ARPA>" of Wed 24 Aug 83 11:31:05-PDT
Joan,
I'm sorry about the lack of communication concerning possible fellowships.
After Paul Armer left, it took us a while to pull the loose ends
together.
Anyway, Carolyn Tajnai will be coordinating all fellowship efforts on behalf
of the department, so you may wish to talk with her. We are trying to
route all information concerning all fellowships to her.
Betty
-------
∂24-Aug-83 1847 BRODER@SU-SCORE.ARPA AFLB
Received: from SU-SCORE by SU-AI with TCP/SMTP; 24 Aug 83 18:47:46 PDT
Return-Path: <PHY@SU-AI>
Received: from SU-AI.ARPA by SU-SCORE.ARPA with TCP; Tue 23 Aug 83 14:53:29-PDT
Date: 23 Aug 83 1447 PDT
From: Phyllis Winkler <PHY@SU-AI>
Subject: AFLB
To: su-bboards@SU-AI
ReSent-date: Wed 24 Aug 83 18:45:28-PDT
ReSent-from: Andrei Broder <Broder@SU-SCORE.ARPA>
ReSent-to: aflb.su@SU-SCORE.ARPA
Algorithms for Lunch Bunch has a special meeting on
Thursday, August 25, 1983, at 12:30
MJH 352
Prof. Claus Schnorr of Johann Wolfgang Goethe University
will talk on the
`Monte Carlo factoring algorithm'.
∂25-Aug-83 0755 GOLUB@SU-SCORE.ARPA Brooks Vote
Received: from SU-SCORE by SU-AI with TCP/SMTP; 25 Aug 83 07:55:40 PDT
Date: Thu 25 Aug 83 08:02:42-PDT
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Brooks Vote
To: Academic-Council: ;
Unfortunately, the Provost refuses to send the Rod Brooks papers on to the
Advisory Board without a more complete vote from the faculty. There had been
a faculty meeting in the spring where there was a positive vote on Brooks
but not all of you were able to attend.
IT IS URGENT THAT WE OBTAIN YOUR VOTE.
*YES←←←←←←←
*NO←←←←←←←←
*ABSTAIN←←←←←←←←←←
Elyse has the completed file on Brooks in her office. Please get your vote
to me by Monday, August 29, noon.
Many thanks, GENE
-------
∂25-Aug-83 1057 LAWS@SRI-AI.ARPA AIList Digest V1 #48
Received: from SRI-AI by SU-AI with TCP/SMTP; 25 Aug 83 10:56:54 PDT
Date: Thursday, August 25, 1983 9:14AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #48
To: AIList@SRI-AI
AIList Digest Thursday, 25 Aug 1983 Volume 1 : Issue 48
Today's Topics:
AI Literature - Journals & COMTEX & Online Reports,
AI Architecture - The Connection Machine,
Programming Languages - Scheme and Lisp Availability,
Artificial Intelligence - Turing Test & Hofstadter Article
----------------------------------------------------------------------
Date: 20 Aug 1983 0011-MDT
From: Jed Krohnfeldt <KROHNFELDT@UTAH-20>
Subject: Re: AI Journals
I would add one more journal to the list:
Cognition and Brain Theory
Lawrence Erlbaum Associates, Inc.
365 Broadway,
Hillsdale, New Jersey 07642
$18 Individual, $50 Institutional
Quarterly
Basic cognition, proposed models and discussion of
consciousness and mental process, epistemology - from frames to
neurons, as related to human cognitive processes. A "fringe"
publication for AI topics, and a good forum for issues in cognitive
science/psychology.
Also, I notice that the institutional rate was quoted for several of
the journals cited. Many of these journals can be had for less if you
convince them that you are a lone reader (individual) and/or a
student.
[Noninstitutional members of AAAI can get the Artificial Intelligence
Journal for $50. See the last page of the fall AI Magazine.
Another journal for which I have an ad is
New Generation Computing
Springer-Verlag New York Inc.
Journal Fulfillment Dept.
44 Hartz Way
Secaucus, NJ 07094
A quarterly English-language journal devoted to international
research on the fifth generation computer. [It seems to be
very strong on hardware and logic programming.]
1983 - 2 issues - $52. (Sample copy free.)
1984 - 4 issues - $104.
-- KIL]
------------------------------
Date: Sun 21 Aug 83 18:06:52-PDT
From: Robert Amsler <AMSLER@SRI-AI>
Subject: Journal listings
Computing Reviews, Nov. 1982, lists all the periodicals they receive
and their addresses. Handy list of a lot of CS journals.
------------------------------
Date: Tue, 23 Aug 83 11:05 EDT
From: Tim Finin <Tim%UPenn@UDel-Relay>
Subject: COMTEX and getting AI technical reports
There WAS a company which offered a service in which subscribers would
get copies of recent technical reports on all areas of AI research -
COMTEX. The reports were to be drawn from universities and
institutions doing AI research. The initial offering in the series
contained old Stanford and MIT memos. The series was intended to
provide very timely access to current research in the participating
institutions. COMTEX has decided to discontinue the AI series, however.
Perhaps if they perceive an increased demand for this series they will
reactivate it.
Tim
[There is a half-page Comtex ad for the MIT and Stanford memoranda in
the Fall issue of AI Magazine, p. 79. -- KIL]
------------------------------
Date: 19 Aug 83 19:21:34 PDT (Friday)
From: Hamilton.ES@PARC-MAXC.ARPA
Subject: On-line tech reports?
I raised this issue on Human-nets nearly two years ago and didn't seem
to get more than a big yawn for a response.
Here's an example of what I had to go through recently: I saw an
interesting-looking CMU tech report (Newell, "Intellectual Issues in
the History of AI") listed in SIGART News. It looked like I could
order it from CMU. No ARPANET address was listed, so I wrote -- I
even gave them my ARPANET address. They sent me back a form letter
via US Snail referring me to NTIS. So then I phoned NTIS. I talked
to an answering machine and left my US Snail address and the order
number of the tech report. They sent me back a postcard giving the
price, something like $7. I sent them back their order form,
including my credit card#. A week or so later I got back a moderately
legible document, probably reproduced from microfiche, that looks
suspiciously like a Bravo document that's probably on line somewhere,
if I only knew where. I'm not picking on CMU -- this is a general
problem.
There's GOT to be a better way. How about: (1) Have a standard
directory at each major ARPA host, containing at least a catalog with
abstracts of all recent tech reports, and info on how to order, and
hopefully full text of at least the most recent and/or popular ones,
available for FTP, perhaps at off-peak hours only. (2) Hook NTIS into
ARPANET, so that folks could browse their catalogs and submit orders
electronically.
RUTGERS used to have an electronic mailing list to which they
periodically sent updated tech report catalogs, but that's about the
only activity of this sort that I've seen.
We've got this terrific electronic highway. Let's make it useful for
more than mailing around collections of flames, like this one!
--Bruce
------------------------------
Date: 23 August 1983 00:22 EDT
From: Alan Bawden <ALAN @ MIT-MC>
Subject: The Connection Machine
Date: Thu 18 Aug 83 13:46:13-PDT
From: David Rogers <DRogers at SUMEX-AIM.ARPA>
The closest hardware I am aware of is called the Connection
Machine, and is being developed at MIT by Alan Bawden, Dave
Christman, and Danny Hillis ...
also Tom Knight, David Chapman, Brewster Kahle, Carl Feynman, Cliff
Lasser, and Jon Taft. Danny Hillis provided the original ideas, his
is the name to remember.
The project involves building a model with about 2↑10 processors.
The prototype Connection Machine was designed to have 2↑20 processors,
although 2↑10 might be a good size to actually build to test the idea.
One way to arrive at a superficial understanding of the Connection
Machine would be to imagine augmenting a NETL machine with the ability
to pass addresses (or "pointers") as well as simple markers. This
permits the Connection Machine to perform even more complex pattern
matching on semantic-network-like databases. The detection of any
kind of cycle (find all people who are employed by their own fathers),
is the canonical example of something this extension allows.
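As a concrete, strictly sequential illustration of that canonical query,
here is a small Common Lisp sketch over a toy association-list database.
On a Connection Machine each node would answer in parallel by passing
pointers; the data, names, and code below are invented purely for
illustration.

(defparameter *links*
  ;; toy facts: (person relation target)
  '((ken  father    jack)
    (ken  works-for acme)
    (bill father    tom)
    (bill works-for tom)          ; Bill is employed by his own father
    (tom  works-for megacorp)))

(defun link (person relation)
  "Follow RELATION from PERSON in *LINKS*, or return NIL if absent."
  (third (find-if (lambda (l) (and (eq (first l) person)
                                   (eq (second l) relation)))
                  *links*)))

(defun employed-by-own-father ()
  "Return every person whose employer is also recorded as their father."
  (remove-duplicates
   (loop for entry in *links*
         for person = (first entry)
         when (and (link person 'father)
                   (eq (link person 'father) (link person 'works-for)))
           collect person)))

;; (employed-by-own-father)  =>  (BILL)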
But that's only one way to program a Connection Machine. In fact, the
thing seems to be a rather general parallel processor.
MIT AI Memo #646, "The Connection Machine" by Danny Hillis, is still a
perfectly good reference for the general principles behind the
Connection Machine, despite the fact that the hardware design has
changed a bit since it was written. (The memo is currently being
revised.)
------------------------------
Date: 22 August 1983 18:20 EDT
From: Hal Abelson <HAL @ MIT-MC>
Subject: Lisps on 68000
At MIT we are working on a version of Scheme (a lexically scoped
dialect of Lisp) that runs on the HP 9836 computer, which is a 68000
machine. Starting 3 weeks from now, 350 MIT students will be using
this system on a full-time basis.
The implementation consists of a kernel written in 68000 assembler,
with most of the system written in Scheme and compiled using a quick
and dirty compiler, which is also written in Scheme. The
implementation sits inside of HP's UCSD-Pascal-clone operating system.
For an editor, we use NMODE, which is a version of EMACS written in
Portable Standard Lisp. Thus our machines run, at present, with both
Scheme and PSL resident, and consequently require 4 megabytes of main
memory. This will change when we get another editor, which will take at
least a few months.
The current system gives good performance for coursework, and is
optimized to provide fast interpreted code, as well as a good
debugging environment for student use.
Work will begin on a serious compiler as soon as the start-of-semester
panic is over. There will also be a compatible version for the Vax.
Distribution policy has not yet been decided upon, but most likely we
will give the system away (not the PSL part, which is not ours to
give) to anyone who wants it, provided that people who get it agree to
return all improvements to MIT.
Please no requests for a few months, though, since we are still making
changes in the design and documentation. Availability will be
announced on this mailing list.
------------------------------
Date: 23 Aug 83 16:36:26-PDT (Tue)
From: harpo!seismo!rlgvax!cvl!umcp-cs!mark @ Ucb-Vax
Subject: Franz lisp on a Sun Workstation.
Article-I.D.: umcp-cs.2096
So what is the true story? One person says it is almost as fast as
a single-user 780; another says it is an incredible hog. These can't
both be right, as a Vax-780 IS at least as fast as a Lispmachine (not
counting the bitmapped screen). It sounded to me like the person who
said it was fast had actually used it, but the person who said it was
slow was just working from general knowledge. So maybe it is fast.
Wouldn't that be nice.
--
spoken: mark weiser
UUCP: {seismo,allegra,brl-bmd}!umcp-cs!mark
CSNet: mark@umcp-cs
ARPA: mark.umcp-cs@UDel-Relay
------------------------------
Date: Tue 23 Aug 83 14:43:50-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: in defense of Turing
Scott Turner (AIList V1 #46) has some interesting points about
intelligence, but I felt compelled to defend Turing in his absence.
The Turing article in Mind (must reading for any AIer) makes it clear
that the test is not proposed to *define* an intelligent system, or
even to *recognize* one; the claim is merely that a system which *can*
pass the test has intelligence. Perhaps this is a subtle difference,
but it's as important as the difference between "iff" and "if" in
math.
Scott bemoans the Turing test as testing for "Human Mimicking
Ability", and suggests that ELIZA has shown this to be possible
without intelligence. ELIZA has fooled some people, though I would not
say it has passed anything remotely like the Turing test. Mimicking
language is a far cry from mimicking intelligence.
In any case, it may be even more difficult to detect
intelligence without doing a comparison to human intellect; after all,
we're the only intelligent systems we know of...
Regards,
David
------------------------------
Date: Tue 23 Aug 83 19:23:00-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: Hofstadter article
Alas, after reading the article about Hofstadter in the
NYTimes, I realized that AI workers can be at least as closed-minded as
other scientists have shown themselves to be. At bottom, it seemed that DH's
basic feeling (that we have a long way to go before creating real
intelligence) is embarrassingly obvious. In the long run, the false
hopes that expectations of quick results give rise to can only hurt
the acceptance of AI in people's minds.
(By the way, I thought the article was very well written, and
would encourage people to look it up. The report is spiced with
opinions from AI workers such as Allen Newell and Marvin Minsky, and it
was enjoyable to hear their candid comments about Hofstadter and AI in
general. Quite a step above the usual articles designed for general
consumption about AI...)
David R.
------------------------------
End of AIList Digest
********************
∂25-Aug-83 1444 BRODER@SU-SCORE.ARPA ISL Seminar
Received: from SU-SCORE by SU-AI with TCP/SMTP; 25 Aug 83 14:44:50 PDT
Date: Thu 25 Aug 83 14:44:28-PDT
From: Andrei Broder <Broder@SU-SCORE.ARPA>
Subject: ISL Seminar
To: aflb.all@SU-SCORE.ARPA
Stanford-Office: MJH 325, Tel. (415) 497-1787
SPECIAL SEMINAR
An Efficient Signature Scheme based on Quadratic Forms
H. Ong and C. P. Schnorr
Fachbereich Mathematik
Universitat Frankfurt
Date: Monday, August 29, 1983
Time: 1:15 pm
Place: Durand 450
Abstract
We propose a signature scheme where the private key is a random
(n,n)-matrix T with coefficients in Z←m = Z/mZ, where m is a product
of two large primes. The corresponding public key is (A,m), where A =
transp(T)*T. A signature y of a message z is any y in (Z←m)↑n such
that transp(y)*A*y approximates z, that is |z-y↑TAy| < 4m↑{2↑{-n+1}}.
Messages z can be efficiently signed using the secret key T and by
approximating z as a sum of squares. Even tighter approximations
|z-y↑TAy| can be achieved by tight signature procedures. Heuristic
arguments show that forging signatures is not easier than factoring m.
The prime decomposition of m is not needed for signing messages;
however, knowledge of this prime decomposition enables forging
signatures. Distinct participants of the system may share the same
modulus m provided that its prime decomposition is unknown. Our
signature scheme is faster than the RSA-scheme.
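The verification half of the scheme is simple enough to sketch. The
following is a toy Common Lisp illustration, not the authors' code: it
computes the quadratic form y'Ay modulo m and accepts the signature when
the result is within a tolerance of the message z. The signing procedure
(which needs the secret T and a sum-of-squares approximation) is omitted,
and the numbers in the example are toy values, not a secure parameter
choice.

(defun quadratic-form (a y m)
  "Compute (transpose y) * A * y modulo M, with A given as a list of rows."
  (mod (loop for row in a
             for yi in y
             sum (* yi (loop for aij in row
                             for yj in y
                             sum (* aij yj))))
       m))

(defun verify (a m y z tolerance)
  "Accept Y as a signature of Z when y'Ay is within TOLERANCE of Z mod M."
  (let ((d (mod (- z (quadratic-form a y m)) m)))
    (<= (min d (- m d)) tolerance)))

;; (verify '((2 1) (1 3)) 101 '(4 5) 44 3)  =>  T   (toy values only)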
-------
∂25-Aug-83 1525 BRODER@SU-SCORE.ARPA Duplication of messages
Received: from SU-SCORE by SU-AI with TCP/SMTP; 25 Aug 83 15:25:29 PDT
Date: Thu 25 Aug 83 15:24:18-PDT
From: Andrei Broder <Broder@SU-SCORE.ARPA>
Subject: Duplication of messages
To: aflb.local@SU-SCORE.ARPA
Stanford-Office: MJH 325, Tel. (415) 497-1787
I am sorry for the duplication of messages sent to the AFLB list.
There is a Stanford-Berkeley loop. I am trying to fix it.
Andrei
-------
∂26-Aug-83 1339 GOLUB@SU-SCORE.ARPA acting chairman
Received: from SU-SCORE by SU-AI with TCP/SMTP; 26 Aug 83 13:39:17 PDT
Date: Fri 26 Aug 83 13:39:12-PDT
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: acting chairman
To: faculty@SU-SCORE.ARPA
I am pleased to say that Bob Floyd will be acting chairman of the department
during my visit to China, Aug 30 to Sept 20.
GENE
-------
∂29-Aug-83 1311 LAWS@SRI-AI.ARPA AIList Digest V1 #49
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Aug 83 13:09:16 PDT
Date: Monday, August 29, 1983 11:08AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #49
To: AIList@SRI-AI
AIList Digest Monday, 29 Aug 1983 Volume 1 : Issue 49
Today's Topics:
Conferences - AAAI-83 Registration,
Bindings - Rog-O-Matic & Mike Mauldin,
Artificial Languages - Loglan,
Knowledge Representation & Self-Consciousness - Textnet,
AI Publication - Corporate Constraints,
Lisp Availability - PSL on 68000's,
Automatic Translation - Lisp-to-Lisp & Natural Language
----------------------------------------------------------------------
Date: 23 Aug 83 11:04:22-PDT (Tue)
From: decvax!linus!philabs!seismo!rlgvax!cvl!umcp-cs!arnold@Ucb-Vax
Subject: Re: AAAI-83 Registration
Article-I.D.: umcp-cs.2093
If there will be over 7000 people attending AAAI-83,
then there will be almost as many people as will
attend the World Science Fiction Convention.
I worked registration for AAAI-83 on Aug 22 (Monday).
There were about 700 spaces available, along with about
1700 people who pre-registered.
[...]
--- A Volunteer
------------------------------
Date: 26 Aug 83 2348 EDT
From: Rudy.Nedved@CMU-CS-A
Subject: Rog-O-Matic & Mike Mauldin
Apparently people want something related to Rog-O-Matic and are
sending requests to "Maudlin". If you notice very closely that is not
how his name is spelled. People are transposing the "L" and the "D".
Hopefully this message will help the many people who are trying to
send Mike mail.
If you still can't get his mailing address right, try
"mlm@CMU-CS-CAD".
-Rudy
A CMU Postmaster
------------------------------
Date: 28 August 1983 06:36 EDT
From: Jerry E. Pournelle <POURNE @ MIT-MC>
Subject: Loglan
I've been interested in LOGLANS since Heinlein's GULF which was in
part devoted to it. Alas, nothing seems to happen that I can use; is
the institute about to publish new materials? Is there anything in
machine-readable form using Loglans? Information appreciated. JEP
------------------------------
Date: 25-Aug-83 10:03 PDT
From: Kirk Kelley <KIRK.TYM@OFFICE-2>
Subject: Re: Textnet
Randy Trigg mentioned his "Textnet" thesis project, which combines
hypertext and NLS/Augment structures, a few issues back. He makes a strong
statement about distributed Textnet on worldnet:
There can be no mad dictator in such an information network.
I am interested in building a testing ground for statements such as
that. It would contain a model that would simulate the global effects
of technologies such as publishing on-line. Here is what may be of
interest to the AI community. The simulation would be a form of
"augmented global self-consciousness" in that it models its own
viability as a service published on-line via worldnet. If you have
heard of any similar project or might be interested in collaborating
on this one, let me know.
-- kirk
------------------------------
Date: 25 Aug 83 15:47:19-PDT (Thu)
From: decvax!microsoft!uw-beaver!ssc-vax!tjj @ Ucb-Vax
Subject: Re: Language Translation
Article-I.D.: ssc-vax.475
OK, you turned your flame-thrower on, now prepare for mine! You want
to know why things don't get published -- take a look at your address
and then at mine. You live (I hope I'm not talking to an AI Project)
in the academic community; believe it or not there are those of us
who work in something euphemistically referred to as industry, where
the rule is not publish or perish, the rule is keep quiet and you are
less likely to get your backside seared! Come on out into the 'real'
world where technical papers must be reviewed by managers that don't
know how to spell AI, let alone understand what language translation
is all about. Then watch as two of them get into a moebius argument,
one saying that there is nothing classified in the paper but there is
proprietary information, while the other says no proprietary but it
definitely is classified! All the while this is going on the
deadline for submission to three conferences passes by like the
perennial river flowing to the sea. I know reviews are not unheard
of in academia, and that professors do sometimes get into arguments,
but I've no doubt that they would be more generally favorable to
publication than managers who are worried about the next
stockholder's meeting.
It ain't all that bad, but at least you seem to need a wider
perspective. Perhaps the results haven't been published; perhaps the
claims appear somewhat tentative; but the testing has been critical,
and the only thing left is primarily a matter of drudgery, not
innovative research. I am convinced that we may certainly find a new
and challenging problem awaiting us once that has been done, but at
least we are not sitting around for years on end trying to paste
together a grammar for a context-sensitive language!!
Ted Jardine
TJ (with Amazing Grace) The Piper
ssc-vax!tjj
------------------------------
Date: 24 Aug 83 19:47:17-PDT (Wed)
From: pur-ee!uiucdcs!uicsl!pollack @ Ucb-Vax
Subject: Re: Lisps on 68000's - (nf)
Article-I.D.: uiucdcs.2626
I played with a version of PSL on a HP 9845 for several hours one
day. The environment was just like running FranzLisp under Emacs in
"electric-lisp" mode. (However, the editor is written in PSL itself,
so it is potentially much more powerful than the emacs on our VAX,
with its screwy c/mock-lisp implementation.) The language is in the
style of Maclisp (rather than INTERLISP) and uses standard scoping
(rather than the lexical scoping of T). The machine has 512 by 512
graphics and a 2.5 dimensional window system, but neither are as
fully integrated into the programming environment as on a Xerox
Dolphin. Although I have no detailed benchmarks, I did port a
context-free chart parser to it. The interpreter speed was not
impressive, but was comparable with interpreted Franz on a VAX.
However, the speed of compiled code was very impressive. The compiler
is incremental, and built-in to the lisp system (like in INTERLISP),
and caused about a 10-20 times speedup over interpreted code (my
estimate is that both the Franz and INTERLISP-d compilers only net
2-5 times speedup). As a result, the compiled parser ran much faster
on the 68000 than the same compiled program on a Dolphin.
I think PSL is definitely a superior lisp for the 68000, but I have
no idea whether it will be available for non-HP machines...
Jordan Pollack
University of Illinois
...pur-ee!uiucdcs!uicsl!pollack
------------------------------
Date: 24 Aug 83 16:20:12-PDT (Wed)
From: harpo!gummo!whuxlb!floyd!vax135!cornell!uw-beaver!ssc-vax!sts@Ucb-Vax
Subject: Re: Lisp-to-Lisp translation
Article-I.D.: ssc-vax.468
These problems just go to show what AI people have known for years
(ever since the first great bust of machine translation) - ya can't
translate without understanding what yer translating. Optimizing
compilers are often impressive encodings of expert coders' knowledge,
and they are for very simple languages - not like Interlisp or English
stan the lep hacker
ssc-vax!sts (soon utah-cs)
------------------------------
Date: 24 Aug 83 16:12:59-PDT (Wed)
From: harpo!floyd!vax135!cornell!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: Language Translation
Article-I.D.: ssc-vax.467
You have heard of my parser. It's a variant on Berkeley's PHRAN, but
has been improved to handle arbitrarily ambiguous sentences. I
submitted a paper on it to AAAI-83, but it was rejected (well, I did
write it in about 3 days - wasn't very good). A paper will be
appearing at the AIAA Computers in Aerospace conference in October.
The parser is only a *basic* solution - I suppose I should have made
that clearer. Since it is knowledge-based, it needs **lots** of
knowledge. Right now we're working on ways to acquire linguistic
knowledge automatically (Selfridge's work is very interesting). The
knowledge base is woefully small, but we don't anticipate any problems
expanding it (famous last words!).
The parser has just been released for use within Boeing ("just"
meaning two days ago), and it may be a while before it becomes
available elsewhere (sorry). I can mail details on it though.
As for language analysis being NP-complete, yes you're right. But are
you sure that humans don't brute-force the process, and that computers
won't have to do the same?
stan the lep hacker
ssc-vax!sts (soon utah-cs)
ps if IBM is using APL, that explains a lot (I'm a former MVS victim)
------------------------------
Date: 24 Aug 83 15:47:11-PDT (Wed)
From: harpo!gummo!whuxlb!floyd!vax135!cornell!uw-beaver!ssc-vax!sts@Ucb-Vax
Subject: Re: So the language analysis problem has been solved?!?
Article-I.D.: ssc-vax.466
Heh-heh. Thought that'd raise a few hackles (my boss didn't approve
of the article; oh well. I tend to be a bit fiery around the edges).
The claim is that we have "basically" solved the problem. Actually,
we're not the only ones - the APE-II parser by Pazzani and others from
the Schank school has also done the same thing. Our parser can
handle arbitrarily ambiguous sentences, generating *all* the possible
meanings, limited only by the size of its knowledge base. We have the
capability to do any sort of idiom, and mix any number of natural
languages. Our problems are really concerned with the acquisition of
linguistic knowledge, either by having nonspecialists put it in by
hand (*everyone* is an expert on the native language) or by having the
machine acquire it automatically. We can mail out some details if
anyone is interested.
One advantage we had is starting from ground zero, so we had very few
preconceptions about how language analysis ought to be done, and
scanned the literature. It became apparent that since we were
required to handle free-form input, any kind of grammar would
eventually become less than useful and possibly a hindrance to
analysis. Dr. Pereira admits as much when he says that grammars only
reflect *some* aspects of language. Well, that's not good enough. Us
folks in applied research can't always afford the luxury of theorizing
about the most elegant methods. We need something that models human
cognition closely enough to make sense to knowledge engineers and to
users. So I'm sort of in the Schank camp (folks at SRI hate 'em)
although I try to keep my thinking as independent as possible (hard
when each camp is calling the other ones charlatans; I'll post
something on that pernicious behavior eventually).
Parallel production systems I'll save for another article...
stan the leprechaun hacker
ssc-vax!sts (soon utah-cs)
ps I *did* read an article of Dr. Pereira's - couldn't understand the
point. Sorry. (perhaps he would be so good as to explain?)
[Which article? -- KIL]
------------------------------
Date: 26 Aug 83 11:19-EST (Fri)
From: Steven Gutfreund <gutfreund%umass-cs@UDel-Relay>
Subject: Musings on AI and intelligence
Spafford's musings on intelligent communications reminded me of an
article I read several years ago by John Thomas (then at T.J. Watson,
now at White Plains, a promotion as IBM sees it).
In the paper he distinguishes between two distinct approaches (or
philosophies) to raising the level of man/machine communication.
[Natural language recognition is one example of this problem. Here the
machine is trying to "decipher" the user's natural prose to determine
the desired action. Another application is intelligent interfaces
that attempt to decipher user "intentions"]
The Human Approach -
Humans view communication as inherently goal based. When one
communicates with another human being, there is an explicit goal -> to
induce a cognitive state in the OTHER. This cognitive state is usually
some function of the communicator's cognitive state (usually the
identity function, since one wants the OTHER to understand what one is
thinking). In this approach the media of communication (words, art,
gesticulations) are not the items being communicated; they are
abstractions meant to key certain responses in the OTHER to arrive at
the desired goal.
The Mechanistic Approach
According to Thomas this is the approach taken by natural language
recognition people. Communication is the application of a decrypto
function to the prose the user employed. This approach is inherently
flawed, according to Thomas, since the actual words and prose do not
contain meaning in themselves but are tools for effecting cognitive
change. Thus, the text of one of Goebbels's propaganda speeches cannot
be examined in itself to determine what it means; one needs an
awareness of the cognitive models, metaphors, and prejudices of the
speaker and listeners. Capturing this sort of real-world knowledge
(biases, prejudices, intuitive feelings) is not a strong point of
AI systems. Yet the extent to which certain words move a person may
depend much more on, say, his Catholic upbringing than on the
words themselves.
If one doubts the above thesis, then I encourage you to read Thomas
Kuhn's book "the Sturcture of Scientific Revolutions" and see how
culture can affect the interpretation of supposedly hard scientific
facts and observations.
Perhaps the thing that best brings this out is an essay (I believe it
was by Smullyan) in "The Mind's I" (Dennett and Hofstadter). In this
essay a homunculus is set up with the basic tools of one of Schank's
language understanding systems (scripts, text, rules, etc.). He then
goes about translating the text from one language to another,
applying a set of mechanistic transformation rules. Given that the
homunculus knows nothing of either the source language or the target
language, can you say that it has any understanding of what the script
was about? How does this differ from today's NUR systems?
- Steven Gutfreund
Gutfreund.umass@udel-relay
------------------------------
End of AIList Digest
********************
∂29-Aug-83 1458 @SU-SCORE.ARPA:RFN@SU-AI
Received: from SU-SCORE by SU-AI with TCP/SMTP; 29 Aug 83 14:57:51 PDT
Delivery-Notice: While sending this message to SU-AI.ARPA, the
SU-SCORE.ARPA mailer was obliged to send this message in 50-byte
individually Pushed segments because normal TCP stream transmission
timed out. This probably indicates a problem with the receiving TCP
or SMTP server. See your site's software support if you have any questions.
Received: from SU-AI.ARPA by SU-SCORE.ARPA with TCP; Mon 29 Aug 83 14:51:32-PDT
Date: 29 Aug 83 1449 PDT
From: Rosemary Napier <RFN@SU-AI>
To: faculty@SU-SCORE
TO: Dave Cheriton
FROM: Bob Floyd
It seems clear to me that you misunderstood what Donald Kennedy was talking
about; as the word "smokeless" implies, he was saying that physical production
(e.g., agriculture) and physical services would continue to be a major economic
component. When he said we cannot live in the Mandevillian hive, he meant that
it was not possible; if he wanted to say that it was undesirable, he would have
used a different auxiliary verb than "cannot."
Since he made no assertions at all about software designers, I assume that you
take any mention at all as personally insulting.
The suggestion that there is a danger "in" (of?) half the population becoming
software designers was ironic, exaggerated, like the suggestion that half the
inhabitants would be venture capitalists; it is merely a literary device, of
the sort frequently used and understood by "humanists and academics."
Be not alarmed.
Kennedy's suggestion that students devote a portion of their lives to public
service was not a suggestion that they devote their whole lives to public
service (i.e., get a government job). Kennedy, who served as head of
the FDA for several years, is himself a paradigm, like Charles E. Wilson,
Henry Kissinger, Daniel P. Moynihan, and David Packard, of the talented
amateurs who fertilize the government with innovation, for better or worse.
Let's drop the issue, lest the more traditional academic disciplines form an
unfounded opinion that computer scientists can't read carefully or make fine
distinctions.
∂30-Aug-83 1143 LAWS@SRI-AI.ARPA AIList Digest V1 #50
Received: from SRI-AI by SU-AI with TCP/SMTP; 30 Aug 83 11:42:56 PDT
Date: Tuesday, August 30, 1983 10:16AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #50
To: AIList@SRI-AI
AIList Digest Tuesday, 30 Aug 1983 Volume 1 : Issue 50
Today's Topics:
AI Literature - Bibliography Request,
Intelligence - Definition & Turing Test & Prejudice & Flamer
----------------------------------------------------------------------
Date: 29 Aug 1983 11:05:14-PDT
From: Susan L Alderson <mccarty@Nosc>
Reply-to: mccarty@Nosc
Subject: Help!
We are trying to locate any and all bibliographies, in electronic
form, of AI and Robotics. I know that this covers a broad spectrum,
but we would rather have too many things to choose from than none at
all. Any help or leads on this would be greatly appreciated.
We are particularly interested in:
AI Techniques
Vision Analysis
AI Languages
Robotics
AI Applications
Speech Analysis
AI Environments
AI Systems Support
Cybernetics
This is not a complete list of our interests, but a good portion of
the high spots!
susie (mccarty@nosc-cc)
[Several partial bibliographies have been published in AIList; more
would be most welcome. Readers able to provide pointers should reply
to AIList as well as to Susan.
Many dissertation and report abstracts have been published in the
SIGART newsletter; online copies may exist. Individual universities
and corporations also maintain lists of their own publications; CMU,
MIT, Stanford, and SRI are among the major sources in this country.
(Try Navarro@SRI-AI for general AI and CPowers@SRI-AI for robotics
reports.)
One of the fastest ways to compile a bibliography is to copy authors'
references from the IJCAI and AAAI conference proceedings. The AI
Journal and other AI publications are also good. Beware of straying
too far from your main topics, however. Rosenfeld's vision and image
processing bibliographies in CVGIP (Computer Vision, Graphics, and
Image Processing) list over 700 articles each year.
-- KIL]
------------------------------
Date: 25 Aug 1983 1448-PDT
From: Jay <JAY@USC-ECLC>
Subject: intelligence is...
An intelligence must have at least three abilities: to act; to
perceive, and classify (as one of: better, the same, worse), the
results of its actions, or the environment after the action; and
lastly to change its future actions in light of what it has perceived,
in an attempt to maximize "goodness" and avoid "badness". My views are
very obviously flavored by behaviorism.
In defense of objections I hear coming... To act is necessary for
intelligence, since it is pointless to call a rock intelligent since
there seems to be no way to detect it. To perceive is necessary of
intelligence since otherwise projectiles, simple chemicals, and other
things that act following a set of rules, would be classified as
intelligent. To change future actions is the most important since a
toaster could perceive that it was overheating, oxidizing its heating
elements, and thus dying, but would be unable to stop toasting until
it suffered a breakdown.
In summary (NOT (AND actp percievep evolvep)) -> (NOT intelligent),
or Action, Perception, and Evolution based upon perception is
necessary for intelligence. I *believe* that these conditions are
also sufficient for intelligence.
awaiting flames,
j'
PS. Yes, the earth's bio-system IS intelligent.
------------------------------
Date: 25 Aug 83 2:00:58-PDT (Thu)
From: harpo!gummo!whuxlb!pyuxll!ech @ Ucb-Vax
Subject: Re: Prejudice and Frames, Turing Test
Article-I.D.: pyuxll.403
The characterization of prejudice as an unwillingness/inability
to adapt to new (contradictory) data is an appealing one.
Perhaps this belongs in net.philosophy, but it seems to me that a
requirement for becoming a fully functional intelligence (human
or otherwise) is to abandon the search for compact, comfortable
"truths" and view knowledge as an approximation and learning as
the process of improving those approximations.
There is nothing wrong with compact generalizations: they reduce
"overhead" in routine situations to manageable levels. It is when
they are applied exclusively and/or inflexibly that
generalizations yield bigotry and the more amusing conversations
with Eliza et al.
As for the Turing test, I think it may be appropriate to think of
it as a "razor" rather than as a serious proposal. When Turing
proposed the test there was a philosophical argument raging over
the definition of intelligence, much of which was outright
mysticism. The famous test cuts the fog nicely: a device needn't
have consciousness, a soul, emotions -- pick your own list of
nebulous terms -- in order to function "intelligently." Forget
whether it's "the real thing," it's performance that counts.
I think Turing recognized that, no matter how successful AI work
was, there would always be those (bigots?) who would rip the back
off the machine and say, "You see? Just mechanism, no soul,
no emotions..." To them, the Turing test replies, "Who cares?"
=Ned=
------------------------------
Date: 25 Aug 83 13:47:38-PDT (Thu)
From: harpo!floyd!vax135!cornell!uw-beaver!uw-june!emma @ Ucb-Vax
Subject: Re: Prejudice and Frames, Turing Test
Article-I.D.: uw-june.549
I don't think I can accept some of the comments being bandied about
regarding prejudice. Prejudice, as I understand the term, refers to
prejudging a person on the basis of class, rather than judging that
person as an individual. Class here is used in a wider sense than
economic. Examples would be "colored folk got rhythm" or "all them
white saxophonists sound the same to me"-- this latter being a quote
from Miles Davis, by the way. It is immediately apparent that
prejudice is a natural result of making generalizations and
extrapolating from experience. This is a natural, and I would suspect
inevitable, result of a knowledge acquisition process which
generalizes.
Bigotry, meanwhile, refers to inflexible prejudice. Miles has used a
lot of white saxophonists, as he recognizes that they don't all sound
the same. Were he bigoted, rather than prejudiced, he would refuse to
acknowledge that. The problem lies in determining at what point an
apparent counterexample should modify a conception. Do we decide that
gravity doesn't work for airplanes, or that gravity always works but
something else is going on? Do we decide that a particular white sax
man is good, or that he's got a John Coltrane tape in his pocket?
In general, I would say that some people out there are getting awfully
self-righteous regarding a phenomenon that ought to be studied as a
result of our knowledge acquisition process rather than used to
classify people as sub-human.
-Joe P.
------------------------------
Date: 25 Aug 83 11:53:10-PDT (Thu)
From: decvax!linus!utzoo!utcsrgv!utcsstat!laura@Ucb-Vax
Subject: AI and Human Intelligence [& Editorial Comment]
Goodness, I stopped reading net.ai a while ago, but had an ai problem
to submit and decided to read this in case the question had already
been asked and answered. News here only lasts for 2 weeks, but things
have changed...
At any rate, you are all discussing here what I am discussing in mail
to AI types (none of whom mentioned that this was going on here, the
cretins! ;-) ). I am discussing bigotry by mail to AI folk.
I have a problem in furthering my discussion. When I mentioned it I
got the same response from 2 of my 3 AI folk, and am waiting for the
same one from the third. I gather it is a fundamental AI sort of
problem.
I maintain that 'a problem' and 'a description of a problem' are not
the same thing. Thus 'discrimination' is a problem, but the word
'nigger' is not. 'Nigger' is a word which describes the problem of
discrimination. One may decide not to use the word 'nigger' but
abolishing the word only gets rid of one description of the problem,
but not the problem itself.
If there were no words to express discrimination, and discrimination
existed, then words would be created (or existing words would be
perverted) to express discrimination. Thus language can be counted
upon to reflect the attitudes of society, but changing the language is
not an effective way to change society.
This position is not going over very well. I gather that there is some
section of the AI community which believes that language (the
description of a problem) *is* the problem. I am thus reduced to
saying, "oh no it isnt't you silly person" but am left holding the bag
when they start quoting from texts. I can bring out anthropology and
linguistics and they can get out some epistomology and Knowledge
Representation, but the discussion isn't going anywhere...
can anybody out there help?
laura creighton
utzoo!utcsstat!laura
[I have yet to be convinced that morality, ethics, and related aspects
of linguistics are of general interest to AIList readers. While I
have (and desire) no control over the net.ai discussion, I am
responsible for what gets passed on to the Arpanet. Since I would
like to screen out topics unrelated to AI or computer science, I may
choose not to pass on some of the net.ai submissions related to
bigotry. Contact me at AIList-Request@SRI-AI if you wish to discuss
this policy. -- KIL]
------------------------------
Date: 25 Aug 1983 1625-PDT
From: Jay <JAY@USC-ECLC>
Subject: [flamer@ida-no: Re: Turing Test; Parry, Eliza, and Flamer]
Is this a human response??
j'
---------------
Return-path: <flamer%umcp-cs%UMCP-CS@UDel-Relay>
Received: from UDEL-RELAY by USC-ECLC; Thu 25 Aug 83 16:20:32-PDT
Date: 25 Aug 83 18:31:38 EDT (Thu)
From: flamer@ida-no
Return-Path: <flamer%umcp-cs%UMCP-CS@UDel-Relay>
Subject: Re: Turing Test; Parry, Eliza, and Flamer
To: jay@USC-ECLC
In-Reply-To: Message of Tue, 16-Aug-83 17:37:00 EDT from
JAY%USC-ECLC@sri-unix.UUCP <4325@sri-arpa.UUCP>
Via: UMCP-CS; 25 Aug 83 18:55-EDT
From: JAY%USC-ECLC@sri-unix.UUCP
. . . Flamer would read messages from the net and then
reply to the sender/bboard denying all the person said,
insulting him, and in general making unsupported statements.
. . .
Boy! Now that's the dumbest idea I've heard in a long time. Only an
idiot such as yourself, who must be totally out of touch with reality,
could come up with that. Besides, what would it prove? It's not much
of an accomplishment to have a program which is stupider than a human.
The point of the Turing test is to demonstrate a program that is as
intelligent as a human. If you can't come up with anything better,
stay off the net!
------------------------------
End of AIList Digest
********************
∂30-Aug-83 1825 LAWS@SRI-AI.ARPA AIList Digest V1 #51
Received: from SRI-AI by SU-AI with TCP/SMTP; 30 Aug 83 18:22:27 PDT
Date: Tuesday, August 30, 1983 4:30PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #51
To: AIList@SRI-AI
AIList Digest Wednesday, 31 Aug 1983 Volume 1 : Issue 51
Today's Topics:
Expert Systems - Availability & Dissent,
Automatic Translation - State of the Art,
Fifth Generation - Book Review & Reply
----------------------------------------------------------------------
Date: 26 Aug 83 17:00:18-PDT (Fri)
From: decvax!ittvax!dcdwest!benson @ Ucb-Vax
Subject: Expert Systems
Article-I.D.: dcdwest.216
I would like to know whether there are commercial expert
systems available for sale. In particular, I would like to
know about systems like the Programmer's Apprentice, or other
such programming aids.
Thanks in advance,
Peter Benson
!decvax!ittvax!dcdwest!benson
------------------------------
Date: 26 Aug 83 11:12:31-PDT (Fri)
From: decvax!genrad!mit-eddie!rh @ Ucb-Vax
Subject: bulstars
Article-I.D.: mit-eddi.656
from AP (or NYT?)
COMPUTER TROUBLESHOOTER:
'Artificially Intelligent' Machine Analyses Phone Trouble
WASHINGTON - Researchers at Bell Laboratories say
they've developed an ''artificially intelligent'' computer
system that works like a highly trained human analyst to
find troublespots within a local telephone network. Slug
PM-Bell Computer. New, will stand. 670 words.
Oh, looks like we beat the Japanese :-( Why weren't we told that
'artificial intelligence' was about to exist? Does anyone know if
this is the newspaper's fault, or if the guy they talked to just
wanted more attention???
-- Randwulf
(Randy Haskins);
Path= genrad!mit-eddie!rh
or... rh@mit-ee (via mit-mc)
------------------------------
Date: Mon 29 Aug 83 21:36:04-CDT
From: Jonathan Slocum <LRC.Slocum@UTEXAS-20.ARPA>
Subject: claims about "solving NLP"
I have never been impressed with claims about "solving the Natural
Language Processing problem" based on `solutions' for 1-2 paragraphs
of [usu. carefully (re)written] text. There are far too many scale-up
problems for such claims to be taken seriously. How many NLP systems
are there that have been applied to even 10 pages of NATURAL text,
with the full intent of "understanding" (or at least "treating in the
identical fashion") ALL of it? Very few. Or 100 pages? Practically
none. Schank & Co.'s "AP wire reader," for example, was NOT intended
to "understand" all the text it saw [and it didn't!], but only to
detect and summarize the very small proportion that fell within its
domain -- a MUCH easier task, esp. considering its minuscule domain
and microscopic dictionary. Even then, its performance was -- at best
-- debatable.
And to anticipate questions about the texts our MT system has been
applied to: about 1,000 pages to date -- NONE of which was ever
(re)written, or pre-edited, to affect our results. Each experiment
alluded to in my previous msg about MT was composed of about 50 pages
of natural, pre-existing text [i.e., originally intended and written
for HUMAN consumption], none of which was ever seen by the project
linguists/programmers before the translation test was run. (Our
dictionaries, by the way, currently comprise about 10,000 German
words/phrases, and a similar number of English words/phrases.)
We, too, MIGHT be subject to further scale-up problems -- but we're a
damned sight farther down the road than just about any other NLP
project has been, and have good reason to believe that we've licked
all the scale-up problems we'll ever have to worry about. Even so, we
would NEVER be so presumptuous as to claim to have "solved the NLP
problem," needing only a large collection of `linguistic rules' to
wrap things up!!! We certainly have NOT done so.
REALLY, now...
------------------------------
Date: Mon 29 Aug 83 17:11:26-CDT
From: Jonathan Slocum <LRC.Slocum@UTEXAS-20.ARPA>
Subject: Machine Translation - a very short tutorial
Before proclaiming the impossibility of automatic [i.e., computer]
translation of human languages, it's perhaps instructive to know
something about how human translation IS done -- and is not done -- at
least in places where it's taken seriously. It is also useful,
knowing this, to propose a few definitions of what may be counted as
"translation" and -- more to the point -- "useful translation."
Abbreviations: MT = Machine Translation; HT = Human Translation.
To start with, the claim that "a real translator reads and understands
a text, and then generates [the text] in the [target] language" is
empty. First, NO ONE really has anything like a good idea of HOW
humans translate, even though there are schools that "teach
translation." Second, all available evidence indicates that (point #1
notwithstanding), different humans do it differently. Third, it can
be shown (viz simultaneous interpreters) that nothing as complicated
as "understanding" need take place in all situations. Fourth,
although the contention that "there generally aren't 1-1
correspondences between words, phrases..." sounds reasonable, it is
in fact false an amazing proportion of the time, for languages with
similar derivational histories (e.g., German & English, to say nothing
of the Romance languages). Fifth, it can be shown that highly
skilled, well-respected technical-manual translators do not always (if
ever) understand the equipment for which they're translating manuals
[and cannot, therefore, be argued to understand the original texts in
any fundamentally deep sense] -- and must be "understanding" in a
shallower, probably more "linguistic" sense (one perhaps more
susceptible to current state-of-the-art computational treatment).
Now as to how translation is performed in practice. One thing to
realize here is that, at least outside the U.S. [i.e., where
translation is taken seriously and where almost all of it is done], NO
HUMAN performs "unrestricted translation" -- i.e., human translators
are trained in (and ONLY considered competent in) a FEW AREAS.
Particularly in technical translation, humans are trained in a limited
number of related fields, and are considered QUITE INCOMPETENT outside
those fields. Another thing to realize is that essentially ALL
TRANSLATIONS ARE POST-EDITED. I refer here not to stylistic editing,
but to editing by a second translator of superior skill and
experience, who NECESSARILY refers to the original document when
revising his subordinate's translation. The claim that MT is
unacceptable IF/BECAUSE the results must be post-edited falls to the
objection that HT would be unacceptable by the identical argument.
Obviously, HT is not considered unacceptable for this reason -- and
therefore, neither should MT be. All arguments for acceptability then
devolve upon the question of HOW MUCH revision is necessary, and HOW
LONG it takes.
Happily, this is where we can leave the territory of pontifical
pronouncements (typically uttered by the un- or ill-informed), and
begin to move into the territory of facts and replicable experiments.
Not entirely, of course, since THERE IS NO SUCH THING AS A PERFECT
TRANSLATION and, worse, NO ONE CAN DEFINE WHAT CONSTITUTES A GOOD
TRANSLATION. Nevertheless, professional post-editors are regularly
saddled with the burden of making operational decisions about these
matters ("Is this sufficiently good that the customer is likely to
understand the text? Is it worth my [company's] time to improve it
further?"). Thus we can use their decisions (reflected, e.g., in
post-editing time requirements) to determine the feasibility of MT in
a more scientific manner; to wit: what are the post-editing
requirements of MT vs. HT? And in order to assess the economic
viability of MT, one must add: taking all expenses into account, is MT
cost-effective [i.e., is HT + human revision more or less expensive
than MT + human revision]?
Re: these last points, our experimental data to date indicate that (1)
the absolute post-editing requirements (i.e., something like "number
of changes required per sentence") for MT are increased w.r.t. HT
[this is no surprise to anyone]; (2) paradoxically, post-editing time
requirements of MT are REDUCED w.r.t. HT [surprise!]; and (3) the
overall costs of MT (including revision) are LESS than those for HT
(including revision) -- a significant finding.
We have run two major experiments to date [with our funding agency
collecting the data, not the project staff], BOTH of which produced
these results; the more recent one naturally produced better results
than the earlier one, and we foresee further improvements in the near
future. Our finding (2) above, which SEEMS inconsistent with finding
(1), is explainable with reference to the sociology of post-editing
when the original translator is known to be human, and when he will
see the results (which probably should, and almost always does,
happen). Further details will appear in the literature.
So why haven't you heard about this, if it's such good news? Well,
you just did! More to the point, we have been concentrating on
producing this system more than on writing papers about it [though I
have been presenting papers at COLING and ACL conferences], and
publishing delays are part of the problem [one reason for having
conferences]. But more papers are in the works, and the secret will
be out soon enough.
------------------------------
Date: 26 Aug 83 1209 PDT
From: Jim Davidson <JED@SU-AI>
Subject: Fifth Generation (Book Review)
[Reprinted from the SCORE BBoard.]
14 Aug 83
by Steven Schlossstein
(c) 1983 Dallas Morning News (Independent Press Service)
THE FIFTH GENERATION: Artificial Intelligence and Japan's Computer
Challenge to the World. By Edward Feigenbaum and Pamela McCorduck
(Addison-Wesley, $15.55).
(Steven Schlossstein lived and worked in Japan with a major Wall
Street firm for more than six years. He now runs his own Far East
consulting firm in Princeton, N.J. His first novel, ''Kensei,'' which
deals with the Japanese drive for industrial supremacy in the high
tech sector, will be published by Congdon & Weed in October).
''Fukoku Kyohei'' was the rallying cry of Meiji Japan when that
isolated island country broke out of its self-imposed cultural cocoon
in 1868 to embark upon a comprehensive plan of modernization to catch
up with the rest of the world.
''Rich Country, Strong Army'' is literally what is meant.
Figuratively, however, it represented Japan's first experimentation
with a concept called industrial policy: concentrating on the
development of strategic industries - strategic whether because of
their connection with military defense or because of their importance
in export industries intended to compete against foreign products.
Japan had to apprentice herself to the West for a while to bring
it off.
The military results, of course, were impressive. Japan defeated
China in 1895, blew Russia out of the water in 1905, annexed Korea and
Taiwan in 1911, took over Manchuria in 1931, and sat at the top of the
Greater East Asia Co-Prosperity Sphere by 1940. This from a country
previously regarded as barbarian by the rest of the world.
The economic results were no less impressive. Japan quickly became
the world's largest shipbuilder, replaced England as the world's
leading textile manufacturer, and knocked off Germany as the premier
producer of heavy industrial machinery and equipment. This from a
country previously regarded as barbarian by the rest of the world.
After World War II, the Ministry of Munitions was defrocked and
renamed the Ministry of International Trade and Industry (MITI), but
the process of strategy formulation remained the same.
Only the postwar rendition was value-added, and you know what
happened. Japan is now the world's No. 1 automaker, produces more
steel than anyone else, manufactures over half the TV sets in the
world, is the only meaningful producer of VTRs, dominates the 64K
computer chip market, and leads the way in one branch of computer
technology known as artificial intelligence (AI). All this from a
country previously regarded as barbarian by the rest of the world.
What next for Japan? Ed Feigenbaum, who teaches computer science
at Stanford and pioneered the development of AI in this country, and
Pamela McCorduck, a New York-based science writer, write that Japan is
trying to dominate AI research and development.
AI, the fifth generation of computer technology, is to your
personal computer as your personal computer is to pencil and paper. It
is based on processing logic, rather than arithmetic, deals in
inferences, understands language and recognizes pictures. Or will. It
is still in its infancy. But not for long; last year, MITI established
the Institute for New Generation Computer Technology, funded it
aggressively, and put some of the country's best brains to work on AI.
AI systems consist of three subsystems: a knowledge base needed
for problem solving and understanding, an inference subsystem that
determines what knowledge is relevant for solving the problem at hand,
and an interaction subsystem that facilitates communication between
the overall system and its user - between man and machine.
Now America does not have a MITI, does not like industrial policy,
has not created an institute to work on AI, and is not even convinced
that AI is the way to go. But Feigenbaum and McCorduck argue that even
if the Japanese are not successful in developing the fifth generation,
the spin-off from this 10-year project will be enormous, with
potentially wide applications in computer technology,
telecommunications, industrial robotics, and national defense.
''The Fifth Generation'' walks you through AI, how and why Japan
puts so much emphasis on the project, and how and why the Western
nations have failed to respond to the challenge. National defense
implications alone, the authors argue, are sufficient to justify our
taking AI seriously.
Smart bombs and laser weapons are but advanced wind-up toys
compared with the AI arsenal of the future. The Pentagon has a little
project called ARPA - Advanced Research Projects Agency - that has
been supporting AI small-scale, but not with the people or funding the
authors feel is meaningful.
Unfortunately, ''The Fifth Generation'' suffers from some
organizational defects. You don't really get into AI and how its
complicated systems operate until you're almost halfway through the
book. And the chapter on industrial policy - from which all
technological blessings flow - is only three pages long. It's also at
the back of the book instead of up front, where it belongs.
But the issues are highlighted well by experts who are not only
knowledgeable about AI but who are concerned about our lack of
response to yet another challenge from Japan. The authors' depiction
of the drivenness of the Japanese is especially poignant. It all boils
down to national survival.
Japan no longer is in a position of apprenticeship to the West.
[Several lines garbled in transmission; the legible fragments mention
strategic industries and ''not trying harder - if at all''.]
[Can America] mount an effective response to the Japanese challenge? ''The
Fifth Generation'' doesn't think so, and for compelling reasons. Give
it a read.
END
------------------------------
Date: Fri 26 Aug 83 15:40:16-PDT
From: Richard Treitel <TREITEL@SUMEX-AIM>
Subject: Re: Fifth Generation (Book Review)
[Reprinted from the SCORE BBoard.]
Anybody who says the Japanese are *leading* in "one branch of computer
technology known as artificial intelligence" is out to lunch. And by
what standards is DARPA describable as small? And what is all this
BirdSong about other countries failing to "respond to the challenge"?
Hasn't this turkey read the Alvey report? Hasn't he noticed France's
vigorous encouragement of their domestic computer industry? Who in
America is not "convinced that AI is the way to go" (this was true of
the leadership in Britain until the Alvey report came out, I admit)
and what are they doing to hinder AI work? Does he think 64k RAMs are
the only things that go into computers? Does he, incidentally, know
that AI has had plenty of pioneers outside of the HPP?
More to the point, most of you know about the wildly over-optimistic
promises that were made in the 60's on behalf of AI, and what happened
in their wake. Whipping up public hysteria is a dangerous game,
especially when neither John Q. Public nor Malcolm Forbes himself can
do very much about the 5GC project, except put pressure on the local
school board to teach the kids some math and science.
- Richard
------------------------------
End of AIList Digest
********************
∂02-Sep-83 1043 LAWS@SRI-AI.ARPA AIList Digest V1 #53
Received: from SRI-AI by SU-AI with TCP/SMTP; 2 Sep 83 10:42:04 PDT
Date: Thursday, September 1, 1983 2:02PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #53
To: AIList@SRI-AI
AIList Digest Friday, 2 Sep 1983 Volume 1 : Issue 53
Today's Topics:
Conferences - AAAI-83 Attendance & Logic Programming,
AI Publications - Artificial Intelligence Journal & Courseware,
Artificial Languages - LOGLAN,
Lisp Availbility - PSL & T,
Automatic Translation - Ada Request,
NL & Scientific Method - Rebuttal,
Intelligence - Definition
----------------------------------------------------------------------
Date: 31 Aug 83 0237 EDT
From: Dave.Touretzky@CMU-CS-A
Subject: AAAI-83 registration
The actual attendance at AAAI-83 was about 2000, plus an additional
1700 people who came only for the tutorials. This gives a total of
3700. While much less than the 7000 figure, it's quite a bit larger
than last year's attendance. Interest in AI seems to be growing
rapidly, spurred partly by media coverage, partly by interest in
expert systems and partly by the 5th generation thing. Another reason
for this year's high attendance was the Washington location. We got
tons of government people.
Next year's AAAI conference will be hosted by the University of Texas
at Austin. From a logistics standpoint, it's much easier to hold a
conference in a hotel than at a university. Unfortunately, I'm told
there are no hotels in Austin big enough to hold us. Such is the
price of growth.
-- Dave Touretzky, local arrangements committee member, AAAI-83 & 84
------------------------------
Date: Thu 1 Sep 83 09:15:17-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Logic Programming Symposium
This is a reminder that the September 1 deadline for submissions to
the IEEE Logic Programming Symposium, to be held in Atlantic City,
New Jersey, February 6-9, 1984, has now all but arrived. If you are
planning to submit a paper, you are urged to do so without further
delay. Send ten double-spaced copies to the Technical Chairman:
Doug DeGroot, IBM Watson Research Center
PO Box 218, Yorktown Heights, NY 10598
------------------------------
Date: Wed, 31 Aug 83 12:10 PDT
From: Bobrow.PA@PARC-MAXC.ARPA
Subject: Subscriptions to the Artificial Intelligence Journal
Individuals (not institutions) belonging to the AAAI, to SIGART or
to AISB can receive a reduced rate personal subscription to the
Artificial Intelligence Journal. To apply for a subscription, send a
copy of your membership form with a check for $50 (made out to
Elsevier) to:
Elsevier Science Publishers
Attn: John Tagler
52 Vanderbilt Avenue
New York, New York 10017
North Holland (Elsevier) will acknowledge receipt of the request for
subscription, and provide information about which issues will be
included in your subscription, and when they should arrive. Back
issues are not available at the personal rate.
Artificial Intelligence, an International journal, has been the
journal of record for the field of Artificial Intelligence since
1970. Articles for submission should be sent (three copies) to Dr.
Daniel G. Bobrow, Editor-in-chief, Xerox Palo Alto Research Center,
3333 Coyote Hill Road, Palo Alto, California 94304, or to Prof.
Patrick J. Hayes, Associate Editor, Computer Science Department,
University of Rochester, Rochester N.Y. 14627.
danny bobrow
------------------------------
Date: 31 Aug 1983 17:10:40 EDT (Wednesday)
From: Marshall Abrams <abrams at mitre>
Subject: College-level courseware publishing
I have learned that Addison-Wesley is setting up a new
courseware/software operation and is looking for microcomputer
software packages at the college level. I think the idea is for a
student to be able to go to the bookstore and buy a disk and
instruction manual for a specific course.
Further details on request.
------------------------------
Date: 29 Aug 1983 2154-PDT
From: VANBUER@USC-ECL
Subject: Re: LOGLAN
[...]
The Loglan Institute is in the middle of a year-long "quiet spell."
After several years of experiments with sounds and patching of various
small logical details (e.g., providing two unambiguous ways to express
the two readings of "pretty little girls"), the Institute is busily
preparing materials on the new version, preparing to "go public" again
in a fairly big way.
Darrel J. Van Buer
------------------------------
Date: 30 Aug 1983 0719-MDT
From: Robert R. Kessler <KESSLER@UTAH-20>
Subject: re: Lisps on 68000's
Date: 24 Aug 83 19:47:17-PDT (Wed)
From: pur-ee!uiucdcs!uicsl!pollack @ Ucb-Vax
Subject: Re: Lisps on 68000's - (nf)
Article-I.D.: uiucdcs.2626
....
I think PSL is definitely a superior lisp for the 68000, but I
have no idea whether it will be available for non-HP machines...
Jordan Pollack
University of Illinois
...pur-ee!uiucdcs!uicsl!pollack
Yes, PSL is available for other 68000's, particularly the Apollo. It
is also being released for the DecSystem-20 and Vax running 4.x Unix.
Send queries to
Cruse@Utah-20
Bob.
------------------------------
Date: Tue, 30 Aug 1983 14:32 EDT
From: MONTALVO@MIT-OZ
Subject: Lisps on 68000's
From: pur-ee!uiucdcs!uicsl!pollack @ Ucb-Vax
Subject: Re: Lisps on 68000's - (nf)
Article-I.D.: uiucdcs.2626
I played with a version of PSL on a HP 9845 for several hours one
day. The environment was just like running FranzLisp under Emacs
in ...
A minor correction so people don't get confused: it was probably an
HP 9836 not an HP 9845. I've used both machines including PSL on the
36, and doubt very much that PSL runs on a 45.
------------------------------
Date: Wed, 31 Aug 83 01:25:29 EDT
From: Jonathan Rees <Rees@YALE.ARPA>
Subject: Re: Lisps on 68000's
Date: 19 Aug 83 10:52:11-PDT (Fri)
From: harpo!eagle!allegra!jdd @ Ucb-Vax
Subject: Lisps on 68000's
Article-I.D.: allegra.1760
... T sounds good, but the people who are saying it's
great are the same ones trying to sell it to me for several
thousand dollars, so I'd like to get some more disinterested
opinions first. The only person I've talked to said it was
awful, but he admits he used an early version.
T is distributed by Yale for $75 to universities and other non-profit
organizations.
Yale has not yet decided on the means by which it will distribute T to
for-profit institutions, but it has been negotiating with a few
companies, including Cognitive Systems, Inc. To my knowledge no final
agreements have been signed, so right now, no one can sell it.
"Supported" versions will be available from commercial outfits who are
willing to take on the extra responsibility (and reap the profits?),
but unsupported versions will presumably still be available directly
from Yale.
Regardless of the final outcome, no company or companies will have
exclusive marketing rights. We do not want a high price tag to
inhibit availability.
Jonathan Rees
T Project
Yale Computer Science Dept.
P.S. As a regular T user, I can say that it is a good system. As its
principal implementor, I won't claim to be disinterested.
Testimonials from satisfied users may be found in previous AILIST
digests; perhaps you can obtain back issues.
------------------------------
Date: 1 Sep 1983 11:58-EDT
From: Dan Hoey <hoey@NRL-AIC>
Subject: Translation into Ada: Request for Info
It is estimated that the WMCCS communications system will require five
years to translate into Ada. Not man-years, but years; if the
staffing is assumed to exceed two hundred then we are talking about a
man-millennium for this task.
Has any work been done on mechanical aids for translating programs
into Ada? I seek pointers to existing and past projects, or
assurances that no work has been done in this area. Any pointers to
such information would be greatly appreciated.
To illustrate my lack of knowledge in this field, the only work I have
heard of for translating from one high-level language to another is
UniLogic's translator for converting BLISS to PL/1. As I understand
it, their program only works on the Scribe document formatter but
could be extended to cover other programs. I am interested in hearing
of other translators, especially those for translating into
strongly-typed languages.
Dan Hoey HOEY@NRL-AIC.ARPA
------------------------------
Date: Wed 31 Aug 83 18:42:08-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Solutions of the natural language analysis problem
Given the downhill trend of some contributions on natural language
analysis in this group, this is my last comment on the topic, and is
essentially an answer to Stan the leprechaun hacker (STLH for short).
I didn't "admit" that grammars only reflect some aspects of language.
(Using loaded verbs such as "admit" is not conducive to the best
quality of discussion.) I just STATED THE OBVIOUS. The equations of
motion only reflect SOME aspects of the material world, and yet no
engineer goes without them. I presented this point at greater length
in my earlier note, but the substantive presentation of method seems
to have gone unanswered. Incidentally, I worked for several years in a
civil engineering laboratory where ACTUAL dams and bridges were
designed, and I never saw there the preference for alchemy over
chemistry that STLH suggests is the necessary result of practical
concerns. Elegance and reproducibility do not seem to be enemies of
generality in other scientific or engineering disciplines. Claiming
for AI an immunity from normal scientific standards (however flawed
...) is excellent support for our many detractors, who may just now be
on the defensive because of media hype, but will surely come back to
the fray, with that weapon plus a long list of unfulfilled promises
and irreproducible "results."
Lack of rigor follows from lack of method. STLH tries to bludgeon us
with "generating *all* the possible meanings" of a sentence. Does he
mean ALL of the INFINITY of meanings a sentence has in general? Even
leaving aside model-theoretic considerations, we are all familiar with
he wanted me to believe P so he said P
he wanted me to believe not P so he said P because he thought
that I would think that he said P just for me to believe P
and not believe it
and so on ...
in spy stories.
The observation that "we need something that models human cognition
closely enough..." begs the question of what human cognition looks
like. (Silly me, it looks like STLH's program, of course.) STLH also
forgets that it is often better for a conversation partner (whether man
or machine) to say "I don't understand" than to go on saying "yes,
yes, yes ..." and get it all wrong, as people (and machines) that are
trying to disguise their ignorance do.
It is indeed not surprising that "[his] problems are really concerned
with the acquisition of linguistic knowledge." Once every grammatical
framework is thrown out, it is extremely difficult to see how new
linguistic knowledge can be assimilated, whether automatically or even
by programming it in. As to the notion that "everyone is an expert on
the native language", it is similar to the claim that everyone with
working ears is an expert in acoustics.
As to "pernicious behavior", it would be better if STLH would first
put his own house in order: he seems to believe that to work at SRI
one needs to swear eternal hate to the "Schank camp" (whatever that
is); and useful criticism of other people's papers requires at least a
mention of the title and of the objections. A bit of that old battered
scientific protocol would help...
Fernando Pereira
------------------------------
Date: Tue, 30 Aug 1983 15:57 EDT
From: MONTALVO@MIT-OZ
Subject: intelligence is...
Date: 25 Aug 1983 1448-PDT
To: AIList at MIT-MC
From: Jay <JAY@USC-ECLC>
Subject: intelligence is...
An intelligence must have at least three abilities: to act; to
perceive, and classify (as one of: better, the same, worse) the
results of its actions, or the environment after the action; and
lastly to change its future actions in light of what it has
perceived, in an attempt to maximize "goodness" and avoid "badness".
My views are very obviously flavored by behaviorism.
Where do you suppose the evolutionary cutoff is for intelligence? By
this definition a Planaria (flatworm) is intelligent. It can learn a
simple Y maze.
I basically like this definition of intelligence but I think the
learning part lends itself to many degrees of complexity, and
therefore, the definition leads to many degrees of intelligence.
Maybe that's ok. I would like to see an analysis (probably NOT on
AIList, although maybe some short speculation might be appropriate)
of the levels of complexity that a learner could have. For example,
one with a representation of the agent's action would be more
complicated (therefore, more intelligent) than one without. Probably
a Planaria has no representation of its actions, only of the results
of its actions.
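[The three abilities in Jay's definition, applied to the Y-maze
example, fit in a few lines of Prolog. This is purely an illustrative
sketch -- none of it comes from either message, and the predicate
names (preference/1, rewarded/1, trial/1, adjust/2) are invented here.
The agent acts on a stored preference, classifies the outcome, and
revises the preference when the outcome is "worse"; it keeps no
history and no model of its own actions, only a single revisable
preference, which puts it at the low end of the complexity scale
Montalvo asks about.]

    :- dynamic(preference/1).

    preference(left).       % initial, arbitrary choice of maze arm
    rewarded(right).        % the environment: food is down the right arm

    other_arm(left, right).
    other_arm(right, left).

    trial(Outcome) :-
        preference(Arm),                       % ability 1: act
        ( rewarded(Arm) -> Outcome = better    % ability 2: classify result
        ; Outcome = worse
        ),
        adjust(Arm, Outcome).                  % ability 3: change future acts

    adjust(_, better).
    adjust(Arm, worse) :-
        retract(preference(Arm)),
        other_arm(Arm, New),
        assertz(preference(New)).

    % ?- trial(O1), trial(O2).   gives O1 = worse, O2 = better: after one
    % unrewarded trial the stored preference has switched to the rewarded arm.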
------------------------------
End of AIList Digest
********************
∂02-Sep-83 1625 SCHMIDT@SUMEX-AIM LMI Window System Manual
Received: from SUMEX-AIM by SU-AI with PUP; 02-Sep-83 16:22 PDT
Date: Fri 2 Sep 83 16:24:43-PDT
From: Christopher Schmidt <SCHMIDT@SUMEX-AIM>
Subject: LMI Window System Manual
To: HPP-Lisp-Machines@SUMEX-AIM
At AAAI one of the LMI folks was kind enough to give me a copy of
the July '83 window system manual. It looks as though it applies to the LM-2
as well, so I have put it in the LM-2 room, where more people might make use
of it.
--Christopher
-------
∂03-Sep-83 0016 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #23
Received: from SU-SCORE by SU-AI with TCP/SMTP; 3 Sep 83 00:16:50 PDT
Date: Friday, September 2, 1983 4:34AM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #23
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Saturday, 3 Sep 1983 Volume 1 : Issue 23
Today's Topics:
Implementations - Prolog in Lisp & Poplog,
Recreation - Puzzle,
Announcement - LP Symposium Reminder
----------------------------------------------------------------------
Date: Thursday, 1 September 1983 12:14:59 EDT
From: Brad.Allen@CMU-RI-ISL1
Subject: Lisp Based Prolog
I would like to voice disagreement with Fernando Pereira's
implication that Lisp Based Prologs are good only for
pedagogical purposes. The flipside of efficiency is usability,
and until there are Prolog systems with exploratory
programming environments which exhibit the same features as,
say Interlisp-D or Symbolics machines, there will be a place
for Lisp Based Prologs which can use such features as, E.g.,
bitmap graphics and calls to packages in other languages.
Lisp Based Prologs can fill the void between now and the
point when software accumulation in standard Prolog has caught
up to that of Lisp ( if it ever does ).
------------------------------
Date: 22 Aug 1983 1202-PDT
From: Firschein at SRI-AI
Subject: Other U.S. Installations of Poplog
Lockheed Palo Alto Research Labs has recently acquired the Poplog
system, which I had learned about from Steve Hardy. Steve's excellent
description of this system ( Prolog Digest 14-June-83, Vol 1, #10 )
does not adequately convey the sheer fun and convenience of using it.
The full screen editor ( Ved ) alone is worth the modest price
( reported to currently be $10,000 { cough -ed }) of the whole package.
The Pop11 language, which the entire system is written in, is very
powerful, and I am still exploring its potential. Finally, I have
never seen so much documentation - all of it on line.
I am very pleased with this system, and am interested in knowing
of other installations of Poplog in the United States. Please
respond either through the Prolog Digest or by mail at the following
address:
Vincent Pecora
Lockheed Research Labs
3251 Hanover St.
Palo Alto, CA 94304
------------------------------
Date: 24 August 1983 1536-PDT (Wednesday)
From: Foonberg at AEROSPACE (Alan Foonberg)
Subject: Another Puzzle
I was glancing at an old copy of Games magazine and came across the
following puzzle:
Can you find a ten digit number such that its left-most digit tells
how many zeroes there are in the number, its second digit tells how
many ones there are, etc.?
For example, 6210001000. There are 6 zeroes, 2 ones, 1 two, no
threes, etc. I'd be interested to see any efficient solutions to
this fairly simple problem. Can you derive all such numbers, not
only ten-digit numbers? Feel free to make your own extensions to
this problem.
Alan
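[A sketch of one way to attack the puzzle, in Prolog since this is the
Prolog Digest; it is not Alan's code, and the predicate names (self/2,
digits_summing/2, counts_ok/3, count/3) are invented here. It assumes
a Prolog providing length/2 and between/3, as most current systems do.
The only pruning used is the observation that the digits of such a
number must sum to its length.]

    % self(N, Digits): Digits is a list of N digits such that the I-th
    % digit (counting from 0) equals the number of occurrences of I.
    self(N, Digits) :-
        length(Digits, N),
        digits_summing(Digits, N),       % the digits must sum to N
        counts_ok(Digits, Digits, 0).

    % Generate digits 0..9 whose total is exactly Sum.
    digits_summing([], 0).
    digits_summing([D|Ds], Sum) :-
        between(0, 9, D),
        D =< Sum,
        Rest is Sum - D,
        digits_summing(Ds, Rest).

    % Check each position I against the number of I's in the whole list.
    counts_ok([], _, _).
    counts_ok([D|Ds], All, I) :-
        count(All, I, D),
        I1 is I + 1,
        counts_ok(Ds, All, I1).

    count([], _, 0).
    count([V|Xs], V, C) :- count(Xs, V, C1), C is C1 + 1.
    count([X|Xs], V, C) :- X \= V, count(Xs, V, C).

    % ?- self(10, Ds).  finds Ds = [6,2,1,0,0,0,1,0,0,0], i.e. 6210001000.
    % ?- self(4, Ds).   finds 1210 and 2020; lengths 1, 2, 3 and 6 have
    %                   no solutions.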
------------------------------
From: David Warren <Warren@SRI-AI>
Subject: LP Symposium Submissions Reminder
This is a reminder that the September 1 deadline for submissions to
the IEEE Logic Programming Symposium, to be held in Atlantic City, New
Jersey, February 6-9, 1984, has now all but arrived. If you are
planning to submit a paper, you are urged to do so without further
delay. Send ten double-spaced copies to the Technical Chairman:
Doug DeGroot, IBM Watson Research Center
PO Box 218, Yorktown Heights, NY 10598
------------------------------
End of PROLOG Digest
********************
∂04-Sep-83 2246 @SU-SCORE.ARPA:reid@Glacier public picking on fellow faculty members
Received: from SU-SCORE by SU-AI with TCP/SMTP; 4 Sep 83 22:46:28 PDT
Received: from Glacier by SU-SCORE.ARPA with TCP; Sun 4 Sep 83 22:46:41-PDT
Date: Sunday, 4 September 1983 22:51:28-PDT
To: Floyd@Sail, Cheriton@Diablo
Cc: Faculty@Score
Subject: public picking on fellow faculty members
In-Reply-To: Your message of 29 Aug 83 1449 PDT.
From: Brian Reid <reid@Glacier>
Bob,
Since you launched a quickie salvo at David, and he seems not to have
responded in kind, I thought it appropriate to respond in kind. Not
only is David my friend, but he's right and you are wrong.
I was present at Kennedy's speech, sitting 20 feet from him and
watching his face. I was sitting with the faculty on the stage at
graduation. It was obvious to me at the time, and equally obvious to
the 500 guffawing faculty members around me, that Kennedy meant that
venture capital and software design were professions of low repute,
something that Stanford students should certainly strive to do better
than.
Kennedy is too smart to say anything unambiguously insulting to us
software designers, or even to you theorists, and you seem to have
fallen for it. He got the laugh he wanted out of his audience by using
the academic equivalent of a Polish joke, and he got the protection he
wanted by having people like you pick over the transcript of his speech
and convince themselves that he didn't really mean to be that
insulting. This sort of skill is very valuable in a college president.
The whole issue is moot now; we aren't mad any more; we have gone back
into our cages like good engineers and gotten back to our research
work. The university will go on, our work will go on, and we will
continue to be isolated from the mainstream of university life and
politics primarily because most of us don't care enough about it to get
involved, except when something like this rattles our cages. Let's just
drop it.
∂06-Sep-83 0020 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #24
Received: from SU-SCORE by SU-AI with TCP/SMTP; 6 Sep 83 00:19:56 PDT
Date: Monday, September 5, 1983 8:48AM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #24
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Tuesday, 6 Sep 1983 Volume 1 : Issue 24
Today's Topics:
Implementations - Prolog in Lisp
----------------------------------------------------------------------
Date: Sat 3 Sep 83 10:51:22-PDT
From: Pereira@SRI-AI
Subject: Prolog in Lisp
Relying on ( inferior ) Prologs in Lisp is the best way of not
contributing to Prolog software accumulation. The large number of
tools that have been built at Edinburgh show the advantages for the
whole Prolog community of sites 100% committed to building everything
in Prolog. By far the best debugging environment for Prolog programs
in use today is the one on the DEC-10/20 system, and that is written
entirely in Prolog. Its operation is very different from, and for
Prolog purposes much superior to, all Prolog debuggers built on top of
Lisp debuggers that I have seen to date. Furthermore, integrating things
like screen management into a Prolog environment in a graceful way is
a challenging problem ( think of how long it took until flavors came
up as the way of building the graphics facilities on the MIT Lisp
machines ), which will also advance our understanding of computer
graphics ( I have written a paper on the subject, "Can drawing be
liberated from the von Neumann style?" ).
I am not saying that Prologs in Lisp are not to be used ( I use one
myself on the Symbolics Lisp machines ), but that a large number of
conceptual and language advances will be lost if we don't try to see
environmental tools in the light of logic programming.
-- Fernando Pereira
------------------------------
Date: Mon, 5 Sep 1983 03:39 EDT
From: Ken%MIT-OZ@MIT-MC
In Pereira's introduction to Foolog and my toy interpreter he says:
However, such simple interpreters ( even the
Abelson and Sussman one which is far better than
PiL ) are not a sufficient basis for the claim
that "it is easy extend Lisp to do what Prolog
does." What Prolog "does" is not just to make
certain deductions in a certain order, but also
make them very fast. Unfortunately, all Prologs in
Lisp I know of fail in this crucial aspect ( by
factors between 30 and 1000 ).
I never claimed that my little interpreter was more than a toy.
Its primary value is pedagogic in that it makes the operational
semantics of the pure part of Prolog clear. Regarding Foolog, I would
defend it in that it is relatively complete -- it contains cut, bagof,
call, etc., and for i/o and arithmetic his primitive called "lisp" is
adequate. In the introduction he claims that it's 75% of the speed of
the Dec 10/20 Prolog interpreter. If
that makes it a toy then all but 2 or 3 Prolog implementations are
non-toy.
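[For the curious: the "pure part" Ken mentions -- conjunctions and
user-defined clauses, tried in textual order with backtracking -- can
itself be written down in three clauses of Prolog. This generic
"vanilla" meta-interpreter sketch is not Ken's or Pereira's code; it
assumes the interpreted program is visible through clause/2 (i.e.,
declared dynamic in most modern systems), and it leaves out exactly
the things Foolog adds: cut, bagof, call, i/o and arithmetic.]

    solve(true) :- !.
    solve((A, B)) :- !, solve(A), solve(B).
    solve(Goal) :- clause(Goal, Body), solve(Body).

    % Example: with append/3 stored as dynamic clauses,
    % ?- solve(append([1,2], [3], Xs)).   binds Xs = [1,2,3].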
[Comment: I agree with Fernando Pereira and Ken that there are lots
and lots of horribly slow Prologs floating around. But I do not
think that it is impossible to write a fast one in Lisp, even on a
standard computer. One of the latest versions of the Foolog
interpreters is actually slightly faster than Dec-10 Prolog when
measuring LIPS. The Foolog compiler I am working on compiled
naive-reverse to half the speed of compiled Dec-10 Prolog ( including
mode declarations ). The compiler opencodes unification, optimizes
tail recursion and uses determinism, and the code fits in about three
pages ( all of it is in Prolog, of course ). -- Martin Nilsson]
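[For readers who have not met the benchmark being traded here: "naive
reverse" is the usual LIPS yardstick, and the textbook version is
reproduced below -- generic code, not taken from Foolog, LM-Prolog or
the DEC-10 system. On a 30-element list it makes 496 procedure calls
("logical inferences"), so the 80 milliseconds quoted below works out
to roughly 496/0.080, or about 6200 LIPS, in line with the 6250 figure
Ken gives.]

    nrev([], []).
    nrev([H|T], R) :- nrev(T, RT), append(RT, [H], R).

    append([], L, L).
    append([H|T], L, [H|R]) :- append(T, L, R).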
I tend to agree that too many claims are made for "one day wonders".
Just because I can implement most of Prolog in one day in Lisp doesn't
mean that the implementation is any good. I know because I started
almost two years ago with a very tiny implementation of Prolog in
Lisp. As I started to use it for serious applications it grew to the
point where today it's up to hundreds of pages of code ( the entire
source code for the system comes to 230 Tops20 pages ). The Prolog
runs on Lisp Machines ( so we call it LM-Prolog ). Mats Carlsson here
in Uppsala wrote a compiler for it and it is a serious implementation.
It runs naive reverse of a list 30 long on a CADR in less than 80
milliseconds (about 6250 Lips). Lambdas and 3600s typically run from
2 to 5 times faster than Cadrs so you can guess how fast it'll run.
Not only is LM-Prolog fast but it incorporates many important
innovations. It exploits the very rich programming environment of
Lisp Machines. The following is a short list of its features:
   User Extensible Interpreter
      Extensible unification, for implementing e.g. parallelism
      and constraints
   Optimizing Compiler
      Open compilation; tail recursion removal and automatic
      detection of determinacy; compiled unification with
      microcoded runtime support; efficient bi-directional
      interface to Lisp
   Database Features
      User controlled indexing; multiple databases (Worlds)
   Control Features
      Efficient conditionals; demand-driven computation of
      sets and bags
   Access to Lisp Machine Features
      Full programming environment, Zwei editor, menus,
      windows, processes, networks, arithmetic ( arbitrary
      precision, floating, rational and complex numbers,
      strings, arrays, I/O streams )
   Language Features
      Optional occur check; handling of cyclic structures;
      arbitrary arity
   Compatibility Package
      Automatic translation from DEC-10 Prolog to LM-Prolog
   Performance
      Compiled code: up to 6250 LIPS on a CADR
      Interpreted code: up to 500 LIPS
   Availability
      LM-Prolog currently runs on LMI CADRs and Symbolics
      LM-2s. Soon to run on Lambdas. Commercially available
      soon.
For more information contact Kenneth M. Kahn or Mats Carlsson.
Inquiries can be directed to:
   KEN@MIT-OZ or
   UPMAIL, P.O. Box 2059
   S-75002 Uppsala, Sweden
   Phone +46-18-111925
------------------------------
End of PROLOG Digest
********************
∂06-Sep-83 0630 REGES@SU-SCORE.ARPA The new CS 105 A & B
Received: from SU-SCORE by SU-AI with TCP/SMTP; 6 Sep 83 06:30:34 PDT
Date: Tue 6 Sep 83 06:31:10-PDT
From: Stuart Reges <REGES@SU-SCORE.ARPA>
Subject: The new CS 105 A & B
To: faculty@SU-SCORE.ARPA
Office: Margaret Jacks 260, 497-9798
The Curriculum Committee last year decided to split the old CS 105 into a
two-quarter sequence that would teach both computer programming and general
issues about computers and computing. The original idea was something like the
combination of CS 105 & 101 spread more evenly over two quarters. The idea of
the course has changed quite a bit since then, however. CS 101 will continue to
provide a unique educational opportunity.
I am putting together a syllabus for 105A since I will be teaching it in a few
weeks. I am also putting together a syllabus for 105B, mostly because I must
meet soon with Jim Adams. Some of you already know from a faculty lunch last
year that Jim will be deciding whether 105A or the combination of 105A and 105B
will satisfy the area 8 (Technology and Applied Science) distribution
requirement for undergraduates.
I am writing this note to solicit opinions. The course is experimental, and I
expect we will try many different things and abandon those that don't work out.
But I would appreciate some early input.
I plan to teach the same Pascal material taught in 106. That is about 60% of
the course. I also plan to teach some elementary LISP. In 105A I will start
with LISP in order to teach the concepts of functions (building up the set of
primitives), data types, predicates and simple list processing. In 105B I will
start with LISP in order to teach some formal logic, some advanced list
processing and recursion.
I also want to discuss the application of computers. I plan to use Stanford as
a case study examining how computers are used for administration by CSD, the
Registrar, GSB and the TIRO project. I also have lectures set aside for AI,
security, ergonomics, history and future trends, many of which I hope to have
given by guest lecturers.
I would greatly appreciate any initial opinions. I have a detailed syllabus for
both courses that I will show to any interested persons.
-------
∂07-Sep-83 0013 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #25
Received: from SU-SCORE by SU-AI with TCP/SMTP; 7 Sep 83 00:13:49 PDT
Date: Tuesday, September 6, 1983 4:09PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #25
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Wednesday, 7 Sep 1983 Volume 1 : Issue 25
Today's Topics:
Implementations - Misunderstanding
----------------------------------------------------------------------
Date: Tue 6 Sep 83 15:22:25-PDT
From: Pereira@SRI-AI
Subject: Misunderstanding
I'm sorry that my first note on Prologs in Lisp was construed as a
comment on Foolog, which appeared in the same Digest. In fact, my
note was sent to the digest BEFORE I knew Ken was submitting Foolog.
Therefore, it was not a comment on Foolog. As to LM-Prolog, I have a
few comments about its speed:
1. It depends essentially on the use of Lisp machine subprimitives and
a microcoded unification, which are beyond Lisp the language and the
Lisp environment in all but the MIT Lisp machines. If LM-Prolog can
be considered as "a Prolog in Lisp," then DEC-10/20 Prolog is a Prolog
in Prolog ...
2. To achieve that speed in determinate computation requires mapping
Prolog procedure calls into Lisp function calls, which leaves
backtracking in the lurch. The version of LM-Prolog I know of used
stack group switches for backtracking, which is orders of magnitude
slower than backtracking on the DEC-20 system.
3. Code compactness is sacrificed by compiling from Prolog into Lisp
with open-coded unification. This is important because it worsens
the paging behavior of large programs.
There are a lot of other issues in estimating the "real" efficiency of
Prolog systems, such as GC requirements and exact TRO discipline. For
example, using CONS space for runtime Prolog data structures is a
common technique that seems adequate when testing with naive reverse
of a 30 long list, but appears hopeless for programs that build
structure and backtrack a lot, because CONS space is not stack
allocated ( unless you use certain nonportable tricks, and even
then... ), and therefore is not reclaimed on backtracking ( one might
argue that Lisp programs for the same task have the same problem, but
efficient backtracking is precisely one of the major advantages of
good Prolog implementations ).
The current Lisp machines have exciting environment tools from which
Prolog users would like to benefit. I think that building Prolog
systems in Lisp will hit artificial performance and language barriers
much before the actual limits of the hardware employed are reached.
The approach I favor is to take the latest developments in Prolog
implementation and use them to build Prolog systems that coexist with
Lisp on those machines, but use all the hardware resources. I think
this is possible with a bit of cooperation from manufacturers, and I
have reasons to hope this will happen soon, and produce Prolog systems
with a performance far superior to DEC-20 Prolog.
Ken's approach may produce a tolerable system in the short term, but I
don't think it can ever reach the performance and functionality which
I think the new machines can deliver. Furthermore, there are big
differences between the requirements of experimental systems, with all
sorts of new goodies, and day-to-day systems that do the standard
things, but just much better. Ken's approach risks producing a system
that falls between these (conflicting) goals, leading to a much larger
implementation effort than is needed just for experimenting with
language extensions ( most of the time better done in Prolog ) or just
for a practical system.
-- Fernando Pereira
PS: For all it is worth, the source of DEC-20 Prolog is 177 pages of
Prolog and 139 of Macro-10 (at 1 instruction per line...). The system
comprises a full compiler, interpreter, debugger and run time system,
not using anything external besides operating system I/O calls. We
estimate it incorporates between 5 and 6 man years of effort.
According to Ken, LM-Prolog is 230 pages of Lisp and Prolog ...
------------------------------
End of PROLOG Digest
********************
∂09-Sep-83 1242 @MIT-MC:AUSTIN@DEC-MARLBORO DISTRIBUTION LIST MEMBERSHIP
Received: from MIT-MC by SU-AI with TCP/SMTP; 9 Sep 83 12:42:11 PDT
Date: 9 Sep 1983 1502-EDT
From: AUSTIN at DEC-MARLBORO
To: PHILOSOPHY-OF-SCIENCE at MIT-MC
Subject: DISTRIBUTION LIST MEMBERSHIP
Message-ID: <"MS10(2124)+GLXLIB1(1136)" 11950304304.15.647.4421 at DEC-MARLBORO>
PLEASE ADD MY NAME TO THIS DISTRIBUTION LIST.
MY NAME IS TOM AUSTIN AND MY NETWORK ADDRESS IS
AUSTIN@DEC-MARLBORO.
THANKS!
--------
∂09-Sep-83 1317 LAWS@SRI-AI.ARPA AIList Digest V1 #54
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Sep 83 13:16:53 PDT
Date: Friday, September 9, 1983 9:02AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #54
To: AIList@SRI-AI
AIList Digest Friday, 9 Sep 1983 Volume 1 : Issue 54
Today's Topics:
Robotics - Walking Robot,
Fifth Generation - Book Review Discussion,
Methodology - Rational Psychology,
Lisp Availability - T,
Prolog - Lisp Based Prolog, Foolog
----------------------------------------------------------------------
Date: Fri 2 Sep 83 19:24:59-PDT
From: John B. Nagle <NAGLE@SU-SCORE.ARPA>
Subject: Strong, agile robot
[Reprinted from the SCORE BBoard.]
There is a nice article in the current Robotics Age about an
outfit down in Anaheim (not Disney) that has built a six-legged robot
with six legs spaced radially around a circular core. Each leg has
three motors, and there are enough degrees of freedom in the system to
allow the robot to assume various postures such as a low, tucked one
for tight spots; a tall one for looking around, and a wide one for
unstable surfaces. As a demonstration, they had the robot climb into
the back of a pickup truck, climb out, and then lift up the truck by
the rear end and move the truck around by walking while lifting the
truck.4
It's not a heavy AI effort; this thing is a teleoperator
controlled by somebody with a joystick and some switches (although it
took considerable computer power to make it possible for one joystick
to control 18 motors in such a way that the robot can walk faster than
most people). Still, it begins to look like walking machines are
finally getting to the point where they are good for something. This
thing is about human sized and can lift 900 pounds; few people can do
that.
------------------------------
Date: 3 Sep 83 12:19:49-PDT (Sat)
From: harpo!eagle!mhuxt!mhuxh!mhuxr!mhuxv!akgua!emory!gatech!pwh@Ucb-Vax
Subject: Re: Fifth Generation (Book Review)
Article-I.D.: gatech.846
In response to Richard Treitel's comments about the Fifth Generation
book review recently posted:
*This* turkey, for one, has not heard of the "Alvey report."
Do tell...
I believe that part of your disagreement with the book reviewer stems
from the fact that you seem to be addressing different audiences. He,
a concerned but ignorant lay audience; you, the AI intelligentsia on
the net.
phil hutto
CSNET pwh@gatech
INTERNET pwh.gatech@udel-relay
UUCP ...!{allegra, sb1, ut-ngp, duke!mcnc!msdc}!gatech!pwh
p.s. - Please do elaborate on the Alvey Report. Sounds fascinating.
------------------------------
Date: Tue 6 Sep 83 14:24:28-PDT
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Re: Fifth Generation (Book Review)
Phil,
I wish I were in a position to elaborate on the Alvey Report. Here's
all I know, as relayed by a friend of mine who is working back in
Britain:
As a response to either (i) the challenge/promise of the Information
Era or (ii) the announcement of a major Japanese effort to develop AI
systems, Mrs. Thatcher's government commissioned a Commission,
chaired by some guy named Alvey about whom I don't know anything
(though I suspect he is an academic of some stature, else he wouldn't
have been given the job). The mission of this Commission (or it may
have been a Committee) was to produce recommendations for national
policy, to be implemented probably by the Science and Engineering
Research Council. They found that while a few British universities
are doing quite good computer science, only one of them is doing AI
worth mentioning, namely Edinburgh, and even there, not too much of
it. (The reason for this is that an earlier Government commissioned
another Report on AI, which was written by Professor Sir James
Lighthill, an academic of some stature. Unfortunately he is a
mathematician specialising in fluid dynamics -- said to have designed
Concorde's wings, or some such -- and he concluded that the only bit
of decent work that had been done in AI to date was Terry Winograd's
thesis (just out) and that the field showed very little promise. As a
result of the Lighthill Report, AI was virtually a dirty word in
Britain for ten years. Most people still think it means artificial
insemination.) Alvey's group also found, what anyone could have told
the Government, that research on all sorts of advanced science and
technology was disgracefully stunted. So they recommended that a few
hundred million pounds of state and industrial funds be pumped into
research and education in AI, CS, and supporting fields. This
happened about a year ago, and the Gov't basically bought the whole
thing, with the result that certain segments of the academic job
market over there went straight from famine to feast (the reverse
change will occur pretty soon, I doubt not). It kind of remains to be
seen what industry will do, since we don't have a MITI.
I partly accept your criticism of my criticism of that review, but I
also believe that a journalist has an obligation not to publish
falsehoods, even if they are generally believed, and to do more than
re-hash the output of his colleagues into a form consistent with the
demands of the story he is "writing".
- Richard
------------------------------
Date: Sat 3 Sep 83 13:28:36-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Rational Psychology
I've just read Jon Doyle's paper "Rational Psychology" in the latest
AI Magazine. It's one of those papers you wish (I wish) you had
written yourself. The paper shows implicitly what is wrong with many
of the arguments in discussions on intelligence and language analysis
in this group. I am posting this as a starting shot in what I would
like to be a rational discussion of methodology. Any takers?
Fernando Pereira
PS. I have been a long-time fan of Truesdell's rational mechanics and
thermodynamics (being a victim of "black art" physics courses). Jon
Doyle's emphasis on Truesdell's methodology is for me particularly
welcome.
[The article in question is rather short, more of an inspirational
pep talk than a guide to the field. Could someone submit one
"rational argument" or other exemplar of the approach? Since I am
not familiar with the texts that Doyle cites, I am unable to discern
what he and Fernando would like us to discuss or how they would have
us go about it. -- KIL]
------------------------------
Date: 2 Sep 1983 11:26-PDT
From: Andy Cromarty <andy@aids-unix>
Subject: Availability of T
Yale has not yet decided on the means by which it will distribute
T to for-profit institutions, but it has been negotiating with a
few companies, including Cognitive Systems, Inc. To my knowledge
no final agreements have been signed, so right now, no one can sell
it. ...We do not want a high price tag to inhibit availability.
-- Jonathan Rees, T Project (REES@YALE) 31-Aug-83
About two days before you sent this to the digest, I received a
14-page T licensing agreement from Yale University's "Office of
Cooperative Research".
Prices ranged from $1K for an Apollo to $5K for a VAX 11/780 for
government contractors (e.g. us), with no software support or
technical assistance. The agreement does not actually say that
sources are provided, although that is implied in several places. A
rather murky trade secret clause was included in the contract.
It thus appears that T is already being marketed. These cost figures,
however, are approaching Scribe territory. Considering (a) the cost
of $5K per VAX CPU, (b) the wide variety of alternative LISPs
available for the VAX, and (c) the relatively small base of existing T
(or Scheme) software, perhaps Yale does "want a high price tag to
inhibit availability" after all....
asc
------------------------------
Date: Thursday, 1 September 1983 12:14:59 EDT
From: Brad.Allen@CMU-RI-ISL1
Subject: Lisp Based Prolog
[Reprinted from the Prolog Digest.]
I would like to voice disagreement with Fernando Pereira's implication
that Lisp Based Prologs are good only for pedagogical purposes. The
flipside of efficiency is usability, and until there are Prolog
systems with exploratory programming environments which exhibit the
same features as, say, Interlisp-D or Symbolics machines, there will be
a place for Lisp Based Prologs which can use such features as, e.g.,
bitmap graphics and calls to packages in other languages. Lisp Based
Prologs can fill the void between now and the point when software
accumulation in standard Prolog has caught up to that of Lisp ( if it
ever does ).
------------------------------
Date: Sat 3 Sep 83 10:51:22-PDT
From: Pereira@SRI-AI
Subject: Prolog in Lisp
[Reprinted from the Prolog Digest.]
Relying on ( inferior ) Prologs in Lisp is the best way of not
contributing to Prolog software accumulation. The large number of
tools that have been built at Edinburgh show the advantages for the
whole Prolog community of sites 100% committed to building everything
in Prolog. By far the best debugging environment for Prolog programs
in use today is the one on the DEC-10/20 system, and that is written
entirely in Prolog. Its operation is very different from, and for
Prolog purposes much superior to, all the Prolog debuggers built on
top of Lisp debuggers that I have seen to date. Furthermore, integrating things
like screen management into a Prolog environment in a graceful way is
a challenging problem ( think of how long it took until flavors came
up as the way of building the graphics facilities on the MIT Lisp
machines ), which will also advance our understanding of computer
graphics ( I have written a paper on the subject, "Can drawing be
liberated from the von Neumann style?" ).
I am not saying that Prologs in Lisp are not to be used ( I use one
myself on the Symbolics Lisp machines ), but that a large number of
conceptual and language advances will be lost if we don't try to see
environmental tools in the light of logic programming.
-- Fernando Pereira
------------------------------
Date: Mon, 5 Sep 1983 03:39 EDT
From: Ken%MIT-OZ@MIT-MC
Subject: Foolog
[Reprinted from the Prolog Digest.]
In Pereira's introduction to Foolog [a misunderstanding; see the next
article -- KIL] and my toy interpreter he says:
However, such simple interpreters ( even the
Abelson and Sussman one which is far better than
PiL ) are not a sufficient basis for the claim
that "it is easy extend Lisp to do what Prolog
does." What Prolog "does" is not just to make
certain deductions in a certain order, but also
make them very fast. Unfortunately, all Prologs in
Lisp I know of fail in this crucial aspect ( by
factors between 30 and 1000 ).
I never claimed for my little interpreter that it was more than a toy.
Its primary value is pedagogic in that it makes the operational
semantics of the pure part of Prolog clear. Regarding Foolog, I
would defend it in that it is relatively complete --
it contains cut, bagof, call, etc., and for I/O and arithmetic his
primitive called "lisp" is adequate. In the introduction he claims
that it's 75% of the speed of the Dec 10/20 Prolog interpreter. If
that makes it a toy then all but 2 or 3 Prolog implementations are
non-toy.
[Comment: I agree with Fernando Pereira and Ken that there are lots
and again lots of horribly slow Prologs floating around. But I do not
think that it is impossible to write a fast one in Lisp, even on a
standard computer. One of the latest versions of the Foolog
interpreters is actually slightly faster than Dec-10 Prolog when
measuring LIPS. The Foolog compiler I am working on compiled
naive-reverse to half the speed of compiled Dec-10 Prolog ( including
mode declarations ). The compiler opencodes unification, optimizes
tail recursion and uses determinism, and the code fits in about three
pages ( all of it is in Prolog, of course ). -- Martin Nilsson]
I tend to agree that too many claims are made for "one day wonders".
Just because I can implement most of Prolog in one day in Lisp
doesn't mean that the implementation is any good. I know because I
started almost two years ago with a very tiny implementation of
Prolog in Lisp. As I started to use it for serious applications it
grew to the point where today it's up to hundreds of pages of code (
the entire source code for the system comes to 230 Tops20 pages ).
The Prolog runs on Lisp Machines ( so we call it LM-Prolog ). Mats
Carlsson here in Uppsala wrote a compiler for it and it is a serious
implementation. It runs naive reverse of a list 30 long on a CADR in
less than 80 milliseconds (about 6250 Lips). Lambdas and 3600s
typically run from 2 to 5 times faster than Cadrs so you can guess
how fast it'll run.
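[For reference, the benchmark quoted here is "naive reverse", the
standard Prolog timing test. A minimal sketch of the conventional
definition follows; the predicate names are the usual ones and are not
taken from LM-Prolog itself. Reversing a 30-element list this way
takes 496 procedure calls, so 80 milliseconds corresponds to roughly
496/0.080 = 6200 LIPS, consistent with the figure above.

    % Naive reverse: deliberately quadratic, which is what makes it
    % a useful unit of work for LIPS measurements.
    nreverse([], []).
    nreverse([H|T], Reversed) :-
        nreverse(T, ReversedTail),
        concatenate(ReversedTail, [H], Reversed).

    concatenate([], List, List).
    concatenate([H|T], List, [H|Rest]) :-
        concatenate(T, List, Rest).

    % ?- nreverse([1,2,3], R).
    %    R = [3,2,1]
]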
Not only is LM-Prolog fast but it incorporates many important
innovations. It exploits the very rich programming environment of
Lisp Machines. The following is a short list of its features:
User Extensible Interpreter
    Extensible unification for implementing, e.g.,
    parallelism and constraints
Optimizing Compiler
    Open compilation
    Tail recursion removal and automatic detection of determinacy
    Compiled unification with microcoded runtime support
    Efficient bi-directional interface to Lisp
Database Features
    User controlled indexing
    Multiple databases (Worlds)
Control Features
    Efficient conditionals
    Demand-driven computation of sets and bags
Access To Lisp Machine Features
    Full programming environment: Zwei editor, menus, windows,
    processes, networks, arithmetic (arbitrary precision,
    floating, rational and complex numbers), strings, arrays,
    I/O streams
Language Features
    Optional occur check
    Handling of cyclic structures
    Arbitrary parity
Compatibility Package
    Automatic translation from DEC-10 Prolog to LM-Prolog
Performance
    Compiled code: up to 6250 LIPS on a CADR
    Interpreted code: up to 500 LIPS
Availability
    LM-Prolog currently runs on LMI CADRs and Symbolics LM-2s.
    Soon to run on Lambdas.  Commercially available soon.
For more information contact
Kenneth M. Kahn or Mats Carlsson.
Inquiries can be directed to:
KEN@MIT-OZ or
UPMAIL P. O. Box 2059
S-75002
Uppsala, Sweden
Phone +46-18-111925
------------------------------
Date: Tue 6 Sep 83 15:22:25-PDT
From: Pereira@SRI-AI
Subject: Misunderstanding
[Reprinted from the PROLOG Digest.]
I'm sorry that my first note on Prologs in Lisp was construed as a
comment on Foolog, which appeared in the same Digest. In fact, my
note was sent to the digest BEFORE I knew Ken was submitting Foolog.
Therefore, it was not a comment on Foolog. As to LM-Prolog, I have a
few comments about its speed:
1. It depends essentially on the use of Lisp machine subprimitives and
a microcoded unification, which are beyond Lisp the language and the
Lisp environment in all but the MIT Lisp machines. If LM-Prolog can
be considered as "a Prolog in Lisp," then DEC-10/20 Prolog is a Prolog
in Prolog ...
2. To achieve that speed in determinate computation requires mapping
Prolog procedure calls into Lisp function calls, which leaves
backtracking in the lurch. The version of LM-Prolog I know of used
stack group switches for backtracking, which is orders of magnitude
slower than backtracking on the DEC-20 system.
3. Code compactness is sacrificed by compiling from Prolog into Lisp
with open-coded unification. This is important because it makes worse
the paging behavior of large programs.
There are a lot of other issues in estimating the "real" efficiency of
Prolog systems, such as GC requirements and exact TRO discipline. For
example, using CONS space for runtime Prolog data structures is a
common technique that seems adequate when testing with naive reverse
of a 30 long list, but appears hopeless for programs that build
structure and backtrack a lot, because CONS space is not stack
allocated ( unless you use certain nonportable tricks, and even
then... ), and therefore is not reclaimed on backtracking ( one might
argue that Lisp programs for the same task have the same problem, but
efficient backtracking is precisely one of the major advantages of
good Prolog implementations ).
The current Lisp machines have exciting environment tools from which
Prolog users would like to benefit. I think that building Prolog
systems in Lisp will hit artificial performance and language barriers
much before the actual limits of the hardware employed are reached.
The approach I favor is to take the latest developments in Prolog
implementation and use them to build Prolog systems that coexist with
Lisp on those machines, but use all the hardware resources. I think
this is possible with a bit of cooperation from manufacturers, and I
have reasons to hope this will happen soon, and produce Prolog systems
with a performance far superior to DEC-20 Prolog.
Ken's approach may produce a tolerable system in the short term, but I
don't think it can ever reach the performance and functionality which
I think the new machines can deliver. Furthermore, there are big
differences between the requirements of experimental systems, with all
sorts of new goodies, and day-to-day systems that do the standard
things, but just much better. Ken's approach risks producing a system
that falls between these (conflicting) goals, leading to a much larger
implementation effort than is needed just for experimenting with
language extensions ( most of the time better done in Prolog ) or just
for a practical system.
-- Fernando Pereira
PS: For what it is worth, the source of DEC-20 Prolog is 177 pages of
Prolog and 139 of Macro-10 (at 1 instruction per line...). The system
comprises a full compiler, interpreter, debugger and run time system,
not using anything external besides operating system I/O calls. We
estimate it incorporates between 5 and 6 man years of effort.
According to Ken, LM-Prolog is 230 pages of Lisp and Prolog ...
------------------------------
End of AIList Digest
********************
∂09-Sep-83 1628 LAWS@SRI-AI.ARPA AIList Digest V1 #55
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Sep 83 16:27:56 PDT
Date: Friday, September 9, 1983 12:29PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #55
To: AIList@SRI-AI
AIList Digest Saturday, 10 Sep 1983 Volume 1 : Issue 55
Today's Topics:
Intelligence - Turing Test & Definitions,
AI Environments - Computing Power & Social Systems
----------------------------------------------------------------------
Date: Saturday, 3 Sep 1983 13:57-PDT
From: bankes@rand-unix
Subject: Turing Tests and Definitions of Intelligence
As much as I dislike adding one more opinion to an overworked topic, I
feel compelled to make a comment on the ongoing discussion of the
Turing test. It seems to me quite clear that the Turing test serves
as a tool for philosophical argument and not as a defining criterion.
It serves the purpose of enlightening those who would assert the
impossibility of any machine ever being intelligent. The point is, if
a machine which would pass the test could be produced, then a person
would have either to admit it to be intelligent or else accept that
his definition of intelligence is something which cannot be perceived
or tested.
However, when the Turing test is used as a tool with which to think
about "What is intelligence?" it leads primarily to insights into the
psychology and politics of what people will accept as intelligent.
(This is a consequence of the democratic definition - it's intelligent
if everybody agrees it is). Hence, we get all sorts of distractions:
Must an intelligent machine make mistakes, should an intelligent
machine have emotions, and most recently would an intelligent machine
be prejudiced? All of this deals with a sociological viewpoint on
what is intelligent, and gets us no closer to a fundamental
understanding of the phenomenon.
Intelligence is an old word, like virtue and honor. It may well be
that the progress of our understanding will make it obsolete; the word
may come to suggest the illusions of an earlier time. Certainly, it
is much more complex than our language patterns allow. The Turing
test suggests it to be a boolean, you got it or you don't. We
commonly use smart as a relational: you're smarter than me, but we're
both smarter than Rover. This suggests intelligence is a scalar,
hence IQ tests. But recent experience with IQ testing across cultures,
together with the data from comparative psychology, would suggest that
intelligence is at least multi-dimensional. Burrowing animals on the
whole do better at mazes than others. Animals whose primary defense
is flight respond differently to aversive conditioning than do more
aggressive species.
We may have seen a recapitulation of this in the last twenty years'
experience with AI. We have moved from looking for the philosopher's
stone, the single thing needed to make something intelligent, to
knowledge based systems. No one would reasonably discuss (I think)
whether my program is smarter than yours. But we might be able to say
that mine knows more about medicine than yours or that mine has more
capacity for discovering new relations of a specified type.
Thus I would suggest that the word intelligence (noun that it is,
suggesting a thing which might somehow be gotten ahold of) should be
used with caution. And that the Turing test, as influential as it has
been, may have outlived its usefulness, at least for discussions among
the faithful.
-Steve Bankes
RAND
------------------------------
Date: Sat, 3 Sep 83 17:07:33 EDT
From: "John B. Black" <Black@YALE.ARPA>
Subject: Learning Complexity
There was recently a query on AIList about how to characterize
learning complexity (and saying that may be the crucial issue in
intelligence). Actually, I have been thinking about this recently, so
I thought I would comment. One way to characterize the learning
complexity of procedural skills is in terms of what kind of production
system is needed to perform the skill. For example, the kind of
things a slug or crayfish (currently popular species in biopsychology)
can do seem characterizable by production systems with minimal
internal memory, conditions that are simple external states of the
world, and actions that are direct physical actions (this is
stimulus-response psychology in a nutshell). However, human skills
(programming computers, doing geometry, etc.) need much more complex
production systems with complex networks as internal memories,
conditions that include variables, and actions that are mental in
addition to direct physical actions. Of course, what form productions
would have to take to exhibit human-level intelligence (if indeed they
can) is an open question and a very active field of research.
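[As an illustration of the distinction drawn above, here is a minimal
sketch, in Prolog, of a production system with no internal memory
beyond a list of current facts: a rule fires when its condition is
present and firing merely adds its action. The rules and predicate
names are invented for the example and are not drawn from any of the
work mentioned; human-level skills would need conditions containing
variables, structured internal memory, and mental as well as physical
actions.

    % Working memory is a list of facts.  A rule fires when its
    % condition is in memory and its action is not yet there;
    % firing adds the action (crude stimulus-response behavior).
    rule(light_on, orient_to_light).
    rule(touched,  withdraw).

    in_memory(X, [X|_]).
    in_memory(X, [_|T]) :- in_memory(X, T).

    step(Memory, [Action|Memory]) :-
        rule(Condition, Action),
        in_memory(Condition, Memory),
        \+ in_memory(Action, Memory).

    % Fire rules until no rule applies.
    run(Memory, Memory) :- \+ step(Memory, _).
    run(Memory, Final)  :- step(Memory, Next), run(Next, Final).

    % ?- run([touched, light_on], M).
    %    M = [withdraw, orient_to_light, touched, light_on]
]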
------------------------------
Date: 5 Sep 83 09:42:44 PDT (Mon)
From: woodson%UCBERNIE@Berkeley (Chas Woodson)
Subject: AI and computing power
Can you direct me to some wise comments on the following question?
Is the progress of AI being held up by lack of computing power?
[Reply follows. -- KIL]
There was a discussion of this on Human-Nets a year ago.
I am reprinting some of the discussion below.
My own feeling is that we are not being held back. If we had
infinite compute power tomorrow, we would not know how to use it.
Others take the opposite view: that intelligence may be brute force
search, massive theorem proving, or large rule bases and that we are
shying away from the true solutions because we want a quick finesse.
There is also a view that some problems (e.g. vision) may require
parallel solutions, as opposed to parallel speedup of iterative
solutions.
The AI principal investigators seem to feel (see the Fall AI Magazine)
that it would be enough if each AI investigator had a Lisp Machine
or equivalent funding. I would extend that a little further. I think
that the biggest bottleneck right now is the lack of support staff --
systems wizards, apprentice programmers, program librarians, software
editors (i.e., people who edit other people's code), evaluators,
integrators, documentors, etc. Could Lucas have made Star Wars
without a team of subordinate experts? We need to free our AI
gurus from the day-to-day trivia of coding and system building just
as we use secretaries and office machines to free our management
personnel from administrative trivia. We need to move AI from the
lone inventor stage to the industrial laboratory stage. This is a
matter of social systems rather than hardware.
-- Ken Laws
------------------------------
Date: Tuesday, 12 October 1982 13:50-EDT
From: AGRE at MIT-MC
Subject: artificial intelligence and computer architecture
[Reprinted from HUMAN-NETS Digest, 16 Oct 1982, Vol. 5, No. 96]
A couple of observations on the theory that AI is being held back by
the sorry state of computer architecture.
First, there are three projects that I know of in this country that
are explicitly trying to deal with the problem. They are Danny
Hillis' Connection Machine project at MIT, Scott Fahlman's NETL
machine at CMU, and the NON-VON project at Columbia (I can't
remember who's doing that one right offhand).
Second, the associative memory fad came and went very many years
ago. The problem, simply put, is that human memory is a more
complicated place than even the hairiest associative memory chip.
The projects I have just mentioned were all first meant as much more
sophisticated approaches to "memory architectures", though they have
become more than that since.
Third, it is quite important to distinguish between computer
architectures and computational concepts. The former will always
lag ten years behind the latter. In fact, although our computer
architectures are just now beginning to pull convincingly out of the
von Neumann trap, the virtual machines that our computer languages
run on haven't been in the von Neumann style for a long time. Think
of object-oriented programming or semantic network models or
constraint languages or "streams" or "actors" or "simulation" ideas
as old as Simula and VDL. True these are implemented on serial
machines, but they evoke conceptions of computation closer to
our ideas about how the physical world works, with notions of causal
locality and data flow and asynchronous communication quite
analogous to those of physics; one uses these languages properly not
by thinking of serial computers but by thinking in these more
general terms. These are the stuff of everyday programming, at
least among the avant garde in the AI labs.
None of this is to say that AI's salvation isn't in computer
architecture. But it is to say that the process of freeing
ourselves from the technology of the 40's is well under weigh.
(Yes, I know, hubris.) - phiL
------------------------------
Date: 13 Oct 1982 08:34 PDT
From: DMRussell at PARC-MAXC
Subject: AI and alternative architectures
[Reprinted from HUMAN-NETS Digest, 16 Oct 1982, Vol. 5, No. 96]
There is a whole subfield of AI growing up around parallel
processing models of computation. It is characterized by the use of
massive compute engines (or models thereof) and a corresponding
disregard for efficiency concerns. (Why not, when you've got n↑n
processors?)
"Parallel AI" is a result of a crossing of interests from neural
modelling, parallel systems theory, and straightforward AI.
Currently, the most interesting work has been done in vision --
where the transformation from pixel data to more abstract
representations (e.g. edges, surfaces or 2.5-D data) via parallel
processing is pretty easy. There has been rather less success in
other, not-so-obviously parallel, fields.
Some work that is being done:
Jerry Feldman & Dana Ballard (University of Rochester)
-- neural modelling, vision
Steve Small, Gary Cottrell, Lokendra Shastri (University of Rochester)
-- parallel word sense and sentence parsing
Scott Fahlman (CMU) -- knowledge rep in a parallel world
??? (CMU) -- distributed sensor net people
Geoff Hinton (UC San Diego?) -- vision
Daniel Sabbah (IBM) -- vision
Rumelhart (UC San Diego) -- motor control
Carl Hewitt, Bill Kornfeld (MIT) -- problem solving
(not a complete list -- just a hint)
The major concern of these people has been controlling the parallel
beasts they've created. Basically, each of the systems accepts data
at one end, and then munges the data and various hypotheses about
the data until the entire system settles down to a single
interpretation. It is all very messy, and incredibly difficult to
prove anything. (e.g. Under what conditions will this system
converge?)
The obvious question is this: What does all of this alternative
architecture business buy you? So far, I think it's an open
question. Suggestions?
-- DMR --
------------------------------
Date: 13 Oct 1982 1120-PDT
From: LAWS at SRI-AI
Subject: [LAWS at SRI-AI: AI Architecture]
[Reprinted from HUMAN-NETS Digest, 16 Oct 1982, Vol. 5, No. 96]
In response to Glasser @LLL-MFE:
I doubt that new classes of computer architecture will be the
solution to building artificial intelligence. Certainly we could
use more powerful CPUs, and the new generation of LISP machines makes
practical approaches that were merely feasibility demonstrations
before. The fact remains that if we don't have the algorithms for
doing something with current hardware, we still won't be able to do
it with faster or more powerful hardware.
Associative memories have been built in both hardware and software.
See, for example, the LEAP language that was incorporated into the
SAIL language. (MAINSAIL, an impressive offspring of SAIL, has
abandoned this approach in favor of subroutines for hash table
maintenance.) Hardware is also being built for data flow languages,
applicative languages, parallel processing, etc. To some extent
these efforts change our way of thinking about problems, but for the
most part they only speed up what we knew how to do already.
For further speculation about what we would do with "massively
parallel architectures" if we ever got them, I suggest the recent
papers by Dana Ballard and Geoffrey Hinton, e.g. in the Aug. ['82]
AAAI conference proceedings [...]. My own belief is that the "missing
link" to AI is a lot of deep thought and hard work, followed by VLSI
implementation of algorithms that have (probably) been tested using
conventional software running on conventional architectures. To be
more specific we would have to choose a particular domain since
different areas of AI require different solutions.
Much recent work has focused on the representation of knowledge in
various domains: representation is a prerequisite to acquisition and
manipulation. Dr. Lenat has done some very interesting work on a
program that modifies its own representations as it analyzes its own
behavior. There are other examples of programs that learn from
experience. If we can master knowledge representation and learning,
we can begin to get away from programming by full analysis of every
part of every algorithm needed for every task in a domain. That
would speed up our progress more than new architectures.
[...]
-- Ken Laws
------------------------------
End of AIList Digest
********************
∂09-Sep-83 1728 LAWS@SRI-AI.ARPA AIList Digest V1 #56
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Sep 83 17:28:17 PDT
Delivery-Notice: While sending this message to SU-AI.ARPA, the
SRI-AI.ARPA mailer was obliged to send this message in 50-byte
individually Pushed segments because normal TCP stream transmission
timed out. This probably indicates a problem with the receiving TCP
or SMTP server. See your site's software support if you have any questions.
Date: Friday, September 9, 1983 3:36PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #56
To: AIList@SRI-AI
AIList Digest Saturday, 10 Sep 1983 Volume 1 : Issue 56
Today's Topics:
Professional Activities - JACM Referees & Inst. for Retraining in CS,
Artificial Languages - Loglan,
Knowledge Representation - Multiple Inheritance Query,
Games - Puzzle & Go Tournament
----------------------------------------------------------------------
Date: 8 Sep 83 10:33:25 EDT
From: Sri <Sridharan@RUTGERS.ARPA>
Subject: referees for JACM (AI area)
Since the time I became the AI Area Editor for the JACM, I have found
myself handicapped for lack of a current roster of referees. This
note is to ask you to volunteer to referee papers for the journal.
JACM is the major outlet for theoretical papers in computer science.
In the area of AI most of the submissions in the past have ranged over
the topics of Automated Reasoning (Theorem Proving, Deduction,
Induction, Default) and Automated Search (Search methods, state-space
algorithms, And/Or reduction searches, analysis of efficiency and
error and attendant tradeoffs). Under my editorship I would like to
broaden the scope to THEORETICAL papers in all areas of AI, including
Knowledge Representation, Learning, Modeling (Space, Time, Causality),
Problem Formulation & Reformulation etc.
If you are willing to be on the roster of referees, please send me a
note with your name, mailing address, net-address and telephone
number. Please also list your areas of interest and competence.
If you wish to submit a paper please follow the procedures described
in the "instructions to authors" page of the journal. Copies of mss
can be sent to either me or to the Editor-in-Chief.
N.S. Sridharan [Sridharan@Rutgers] Area Editor, AI JACM
------------------------------
Date: Wed, 7 Sep 83 16:06 PDT
From: Jeff Ullman <ullman@Diablo>
Subject: Institute for Retraining in CS
[Reprinted from the SU-SCORE BBoard.]
A summer institute for retraining college faculty to teach computer
science is being held at Clarkson College, Potsdam, NY, this summer,
under the auspices of a joint ACM/MAA committee. They need lecturers
in all areas of computer science, to deliver 1-month courses. People
at or close to the Ph.D. level are needed. If interested, contact Ed
Dubinsky at 315-268-2382 (office) 315-265-2906 (home).
------------------------------
Date: 6 Sep 83 18:15:17-PDT (Tue)
From: harpo!gummo!whuxlb!pyuxll!abnjh!icu0 @ Ucb-Vax
Subject: Re: Loglan
Article-I.D.: abnjh.236
[Directed to Pourne@MIT-MC]
1. Rumor has it that SOMEONE at the Univ. of Washington (State of, NOT
D.C.) was working on the [LOGLAN] grammar online (UN*X, as I recall).
I haven't yet had the temerity to post a general inquiry regarding
their locale. If they read your request and respond, please POST
it...some of us out here are also interested.
2. A friend of mine at Ohio State has typed in (by hand!) the glossary
from Vol 1 (the layman's grammar), which could be useful for writing a
"flashcard" program, but both of us are too busy.
Art Wieners
(who will only be at this addr for this week,
but keep your modems open for a resurfacing
at da Labs...)
------------------------------
Date: 7 Sep 83 16:43:58-PDT (Wed)
From: decvax!genrad!grkermit!chris @ Ucb-Vax
Subject: Re: Loglan
Article-I.D.: grkermit.654
I just posted something relevant to net.nlang. (I'm not sure which is
more appropriate, but I'm going to assume that "natural" language is
closer than all of Artificial Intelligence.)
I sent a request for information to the Loglan Institute, (Route 10,
Box 260 Gainesville, FL 32601 [a NEW address]) and they are just about
to go splashily public again. I posted the first page of their reply
letter, see net.nlang for more details. Later postings will cover
their short description of their Interactive Parser, which is among
their many new or improved offerings.
decvax!genrad!grkermit!chris
allegra!linus!genrad!grkermit!chris
harpo!eagle!mit-vax!grkermit!chris
------------------------------
Date: 2-Sep-83 19:33 PDT
From: Kirk Kelley <KIRK.TYM@OFFICE-2>
Subject: Multiple Inheritance query
Can you tell me where I can find a discussion of the anatomy and value
of multiple inheritance? I wonder if it is worth adding this feature
to the design for a lay-person's language, called Players, for
specifying adventures.
-- kirk
------------------------------
Date: 24 August 1983 1536-PDT (Wednesday)
From: Foonberg at AEROSPACE (Alan Foonberg)
Subject: Another Puzzle
[Reprinted from the Prolog Digest.]
I was glancing at an old copy of Games magazine and came across the
following puzzle:
Can you find a ten digit number such that its left-most digit tells
how many zeroes there are in the number, its second digit tells how
many ones there are, etc.?
For example, 6210001000. There are 6 zeroes, 2 ones, 1 two, no
threes, etc. I'd be interested to see any efficient solutions to this
fairly simple problem. Can you derive all such numbers, not only
ten-digit numbers? Feel free to make your own extensions to this
problem.
Alan
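[One brute-force attack, sketched below in Prolog, is offered only as
an illustration, not as a claimed "efficient" solution. A digit list
describes itself when the digit at position I equals the number of
occurrences of I in the list; since every occurrence is counted
exactly once, the digits of any solution must sum to the length of
the number, which prunes the search considerably.

    % self_describing(N, Digits): Digits is a list of N digits whose
    % I-th element (counting from 0) equals the number of times I
    % occurs in the list.
    self_describing(N, Digits) :-
        gen_digits(N, N, Digits),          % digits must sum to N
        check(Digits, 0, Digits).

    % Generate Count digits whose sum is exactly Sum.
    gen_digits(0, 0, []).
    gen_digits(Count, Sum, [D|Rest]) :-
        Count > 0,
        pick(0, Sum, D),
        Count1 is Count - 1,
        Sum1 is Sum - D,
        gen_digits(Count1, Sum1, Rest).

    pick(Low, _, Low).
    pick(Low, High, X) :- Low < High, Low1 is Low + 1, pick(Low1, High, X).

    % The digit at position I must equal the number of I's in the list.
    check([], _, _).
    check([D|Rest], I, Digits) :-
        count(Digits, I, D),
        I1 is I + 1,
        check(Rest, I1, Digits).

    count([], _, 0).
    count([X|T], X, N) :- !, count(T, X, M), N is M + 1.
    count([_|T], X, N) :- count(T, X, N).

    % ?- self_describing(10, Digits).
    %    Digits = [6,2,1,0,0,0,1,0,0,0]      i.e. 6210001000
]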
------------------------------
Date: 5 Sep 83 20:11:04-PDT (Mon)
From: harpo!psl @ Ucb-Vax
Subject: Go Tournament
Article-I.D.: harpo.1840
ANNOUNCING
The First Ever
USENIX
COMPUTER
##### #######
# # # #
# # #
# #### # #
# # # #
# # # #
##### #######
##### #### # # ##### # # ## # # ###### # # #####
# # # # # # # ## # # # ## ## # ## # #
# # # # # # # # # # # # # ## # ##### # # # #
# # # # # ##### # # # ###### # # # # # # #
# # # # # # # # ## # # # # # # ## #
# #### #### # # # # # # # # ###### # # #
A B C D E F G H j K L M N O P Q R S T
19 + + + + + + + + + + + + + + + + + + + 19
18 + + + + + + + + + + + + + + + + + + + 18
17 + + + O @ + + + + + + + + + + + + + + 17
16 + + + O + + + O + @ + + + + + @ + + + 16
15 + + + + + + + + + + + + + + + + + + + 15
14 + + O O + + + O + @ + + + + + + + + + 14
13 + + @ + + + + + + + + + + + + + + + + 13
12 + + + + + + + + + + + + + + + + + + + 12
11 + + + + + + + + + + + + + + + + + + + 11
10 + + + + + + + + + + + + + + + + + + + 10
9 + + + + + + + + + + + + + + + + + + + 9
8 + + + + + + + + + + + + + O O O O @ + 8
7 + + O @ + + + + + + + + + O @ @ @ @ @ 7
6 + + @ O O + + + + + + + + + O O O @ + 6
5 + + O + + + + + + + + + + + + O @ @ + 5
4 + + + O + + + + + + + + + + + O @ + + 4
3 + + @ @ + @ + + + + + + + + @ @ O @ + 3
2 + + + + + + + + + + + + + + + + + + + 2
1 + + + + + + + + + + + + + + + + + + + 1
A B C D E F G H j K L M N O P Q R S T
To be held during the Summer 1984 Usenix conference in Salt Lake
City, Utah.
Probable Rules
-------- -----
1) The board will be 19 x 19.
This size was chosen rather than one of the smaller boards because
there is a great deal of accumulated Go "wisdom" that would be
worthless on smaller boards.
2) The board positions will be numbered as in the diagram above. The
columns will be labeled 'A' through 'T' (excluding 'I') left to
right. The rows will be labeled '19' through '1', top to bottom.
3) Play will continue until both programs pass in sequence. This may
be a trouble spot, but looks like the best approach available.
Several alternatives were considered: (1) have the referee decide
when the game is over by identifying "uncontested" versus "contested"
area; (2) limit the game to a certain number of moves; all of them
had one or another unreasonable effect.
4) There will be a time limit for each program. This will be in the
form of a limit on accumulated "user" time (60 minutes?). If a
program goes over the time limit it will be allowed some minimum
amount of time for each move (15 seconds?). If no move is generated
within the minimum time the game is forfeit.
5) The tournament will use a "referee" program to execute each
competing pair of programs; thus the programs must understand a
standard set of commands and generate output of a standard form.
a) Input to the program. All input commands to the program will
be in the form of lines of text appearing on the standard
input and terminated by a newline.
1) The placement of a stone will be expressed as
letter-number (e.g. "G7"). Note that the letter "I"
is not included.
2) A pass will be expressed as "pass".
3) The command "time" means the time limit has been exceeded
and all further moves must be generated within the shorter
minimum time limit.
b) Output from the program. All output from the program will be
in the form of lines of characters sent to the "standard
output" (terminated by a newline) and had better be unbuffered.
1) The placement of a stone will be expressed as
letter-number, as in "G12". Note that the letter "I"
is not included.
2) A pass will be expressed as "pass".
3) Any other output lines will be considered garbage and
ignored.
4) Any syntactically correct but semantically illegal move
(e.g. spot already occupied, ko violation, etc.) will be
considered a forfeit.
The referee program will maintain a display of the board, the move
history, etc. (A sketch of the move-format check appears after these
rules.)
6) The general form of the tournament will depend on the number of
participants, the availability of computing power, etc. If only a
few programs are entered each program will play every other program
twice. If many are entered some form of Swiss system will be used.
7) These rules are not set in concrete ... yet; this one in
particular.
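[Below is a minimal sketch, in Prolog, of the move-format check
described in rules 5a and 5b. It is illustrative only -- the referee
program itself is not specified here -- and it represents a move as a
Column-Row term such as g-7 rather than as raw text, so the parsing
of the input line is left out, as is board legality (occupied points,
ko, and so on).

    % A move is either the atom 'pass' or Column-Row, where Column is
    % one of the 19 letters A..T with I excluded (lower-case atoms
    % here) and Row is an integer from 1 to 19.
    well_formed(pass).
    well_formed(Column-Row) :-
        go_column(Column),
        integer(Row), Row >= 1, Row =< 19.

    go_column(C) :-
        column_member(C, [a,b,c,d,e,f,g,h,j,k,l,m,n,o,p,q,r,s,t]).

    column_member(X, [X|_]).
    column_member(X, [_|T]) :- column_member(X, T).

    % ?- well_formed(g-7).      succeeds
    % ?- well_formed(i-5).      fails: the letter I is not used
    % ?- well_formed(c-25).     fails: row out of range
]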
Comments, suggestions, contributions, etc. should be sent via uucp
to harpo!psl or via U.S. Mail to Peter Langston / Lucasfilm Ltd. /
P.O. Box 2009 / San Rafael, CA 94912.
For the record: I am neither "at Bell Labs" nor "at Usenix", but
rather "at" a company whose net address is a secret (cough, cough!).
Thus notices like this must be sent through helpful intermediaries
like Harpo. I am, however, organizing this tournament "for" Usenix.
------------------------------
End of AIList Digest
********************
∂10-Sep-83 0017 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #26
Received: from SU-SCORE by SU-AI with TCP/SMTP; 10 Sep 83 00:17:06 PDT
Date: Friday, September 9, 1983 6:03AM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #26
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Saturday, 10 Sep 1983 Volume 1 : Issue 26
Today's Topics:
Representation - Transitive Relations,
Puzzle - Solution to Alan's,
Implementations - Prolog in Lisp
----------------------------------------------------------------------
Date: Wed, 7 Sep 83 15:30:31 PDT
From: (Carl Ponder) Ponder%UCBKim@Berkeley
Subject: Transitive Relations, Oracles, Etc.
I was interested in seeing Vivek's solutions to the transitive(R)
relation; it supposedly checks whether clause R is transitive.
The form was something like this:
transitive( R ) :- not( nontransitive(R) ).
nontransitive( R ) :- R(A, B), R(B, C), not( R(A, C) ).
Under my weak knowledge of Prolog I had always assumed a clause was
satisfied thru an enumeration of elements generated by a unification
set; I never expected it could prove anything as general as this,
since {A,B,C} are universally quantified. Expecting an exhaustive
enumeration of all possible values of {A,B,C} is absurd, since the
structure of "nontransitive" appears to describe the recursively
enumerable set of nontransitive clauses rather than recursively
deciding the question of nontransitivity.
I don't believe nontransitivity ( and thus transitivity ) is
recursive, the following example should show that
non-transitivity is RE-complete & therefore nonrecursive:
W(X,Y) :- <for integers X & Y> if X & Y both halt on
input "42", then return true.
Of course it takes some hacking to write W out; the integer arguments
are regarded as Turing-machine encodings. Of course if both X&Y halt
for the input, and Y&Z halt for the input, then X&Z halt, so the
clause would be transitive over its domain. However, this domain is
not recursively enumerable.
Does the DEC-10 Prolog interpreter have an oracle ?
-- Carl Ponder
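[For contrast, here is what the negation-as-failure test can do when
the relation is an explicit finite table of facts; the edge/2 facts
are invented for the illustration. call/3 here stands for calling R
with two extra arguments; in a Prolog without call/3 the goal can be
built with =.. and run with call/1. Over such a finite, fully
enumerable table the check terminates and decides transitivity; the
difficulty Carl raises arises for relations like W, where the failing
branch of the test cannot be established by any finite search.

    % An explicitly tabulated relation (illustrative facts only):
    edge(a, b).
    edge(b, c).
    edge(a, c).

    % Transitivity by negation as failure: look for a counterexample.
    transitive(R)    :- \+ nontransitive(R).
    nontransitive(R) :-
        call(R, A, B),
        call(R, B, C),
        \+ call(R, A, C).

    % ?- transitive(edge).
    %    succeeds; removing the edge(a, c) fact would make it fail.
]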
------------------------------
Date: Wed 7 Sep 83 11:08:08-PDT
From: Vivek Sarkar <JLH.Vivek@SU-SIERRA>
Subject: Solution to Alan Foonberg's Number Puzzle
Here is a general solution to the puzzle posed by Alan Foonberg:
My generalisation is to consider n-digit numbers in base n. The
digits can therefore take on values in the range 0 .. n-1 .
A summary of the solution is:
n = 4:  1210
n >= 7: (n-4) 2 1, followed by (n-7) 0's, then 1 0 0 0
Further, these describe ALL possible solutions, i.e., radix values of
2,3,5,6 have no solutions, and every other radix has exactly one
solution.
Proof:
Case 2 <= n <= 6:
Consider these as singular cases. It is simple to show that there
are no solutions for 2,3,5,6 and that 1210 is the only solution for
4. You can do this by writing a program to generate all solutions
for a given radix. ( I did that; unfortunately it works out better
in Pascal than Prolog ! )
CASE n >= 7:
It is easy to see that the given number is indeed a solution. ( The
rightmost 1 represents the single occurrence of (n-4) at the
beginning ). For motivation, we can substitute n=10 and get
6210001000, which was the decimal solution provided by Alan.
The tough part is to show that this represents the only solution,
for a given radix. We do this by considering all possible values
for the first digit ( call it d0 ) and showing that d0=(n-4) is
the only one which can lead to a solution.
SUBCASE d0 < (n-4):
Let d0 = n-4-j, where j>=1. Therefore the number has (n-4-j) 0's,
which leaves (j+3) non-zero digits apart from d0. Further these
(j+3) digits must add up to (j+4). ( The sum of the digits of a
solution must be n, as there are n digits in the number, and the
value of each digit contributes to a frequency count of digits
with its positional value). The only way that (j+3) non-zero
digits can add up to (j+4) is by having (j+2) 1's and one 2.
If there are (j+2) 1's, then the second digit from the left,
which counts the number of 1's (call it d1) must = (j+2).
Since j >= 1, d1=(j+2) is neither a 1 nor a 2. Contradiction !
SUBCASE d0 > (n-4):
This leads to 3 possible values for d0: (n-1), (n-2) & (n-3).
It is simple to consider each value and see that it can't
possibly lead to a solution, by using an analysis similar to
the one above.
We therefore conclude that d0=(n-4), and it is straightforward
to show that the given solution is the only possible one, for
this value of d0.
-- Q.E.D.
------------------------------
Date: Thu, 8 Sep 1983 04:26 EDT
From: Ken%MIT-OZ@MIT-MC
Subject: Reply to Pereira
Pereira criticizes Prologs in Lisp by saying that tools should be
written in Prolog. Depending upon what he means by tools I would
agree. LM-Prolog, for instance, has its trace package written
entirely in Prolog. It uses and extends ( in Lisp ) the Zwei editor
however. It may be the case that an editor of Zwei's capability
should be written in Prolog, but it's an enormous job. I would guess
that the number of man-years of effort to create Zwei is greater than
LM-Prolog and DEC-10/20 Prolog together. If you have a group of at
least 20 people who want to reproduce the Lisp Machine's capabilities
in Prolog ( as it seems the Japanese do ) then go ahead. Since we are
two people we decided instead to make minor extensions to the Lisp
Machine's editor, debugger, rubout handler, etc. to handle LM-Prolog
predicates as well as Lisp objects. I agree that one should put
effort into integrating things like screen management into a Prolog
environment. It's an excellent long-term research project -- not
something likely to be done soon.
I feel I should defend LM-Prolog because I expect it to be the best
thing around for the next year or two.
1. It's true that LM-Prolog uses many Lisp machine subprimitives and
that we have written a few ourselves ( without them it runs at about
half speed ). But it is Lisp in that it interfaces smoothly with Lisp
programs and tools. Various parts of the system are user-extensible (
in Lisp ).
2. LM-Prolog did heavily use stack groups for non-determinism. It now
uses "success continuations" which are passed down as arguments. This
means that nondeterministic as well as deterministic "procedure calls"
map to Lisp function applications. Backtracking is no longer
especially inefficient.
3. It's true that by open-coding unification the code is not very
compact. But I don't see why this isn't equally true of Dec 10/20
Prolog.
Regarding the question of GC and using cons space for the environment:
What corresponds to the global stack in Dec 10/20 Prolog is cons space
in LM-Prolog. And we do "use certain nonportable tricks" so that even
that space need not be gc-ed.
I would like to hear what the "artificial performance and language
barriers" are that Pereira expects Lisp-based Prologs to hit. It
would seem that if any existed a few hand-coded micro-code primitives
would fix the problem. Presumably the alternative he is suggesting is
to implement a Prolog abstract machine in micro-code. How is that so
different? It seems that we take primitives designed for Lisp plus a
few extra while Pereira advocates implementing the entire set of
primitives with Prolog in mind. Our experience has been that the set
of primitives that the Lisp machine provides is good enough. I
wonder how this micro-coded Prolog machine will interface to Lisp or
will the entire Lisp machine software be reproduced in Prolog?
I agree that my approach is short-term ( say 2 years maybe ) but I
repeat that the alternative is a much larger job than Pereira seems to
hint. I base this mostly on first-hand observations of the Lisp
machine project over the last 9 years. I admit there is a trade-off
between the goal of producing a "day-to-day" system that does standard
things well and adding all sorts of "goodies". The difference between
Lisp 1.5 or Standard Lisp and Lisp Machine Lisp ( Zeta Lisp ) is just
a lot of "goodies". But I would dread the thought of building all the
Lisp machine software in Lisp 1.5. To me the analogy "Zeta Lisp is to
Lisp 1.5" as "LM-Prolog is to Dec 10/20 Prolog" is a good one.
------------------------------
End of PROLOG Digest
********************
∂13-Sep-83 1349 @SU-SCORE.ARPA:CAB@SU-AI hives, smoke, etc.
Received: from SU-SCORE by SU-AI with TCP/SMTP; 13 Sep 83 13:49:26 PDT
Received: from SU-AI.ARPA by SU-SCORE.ARPA with TCP; Tue 13 Sep 83 13:49:51-PDT
Date: 13 Sep 83 1348 PDT
From: Chuck Bigelow <CAB@SU-AI>
Subject: hives, smoke, etc.
To: faculty@SU-SCORE
While I agree wholeheartedly with all the recent mail that suggests we drop the
issue of the alleged slur on "software designers", I cannot resist telling you
all of the superb research done by Lynn Ruggles on this problem. She has
single-handedly located a copy of Mandeville's Fable of the Bees, in our very
own Stanford Library no less, and has made a copy of the whole thing, plus later
commentary, and placed it at the front desk of our very own computer science
department, for our reference, edification, and delectation.
Now, no one can claim that we are not cultured men and women of letters --
fully as erudite and cognizant of obscure verse treatises as our colleagues
in the arts and sciences.
--Chuck Bigelow
∂14-Sep-83 2203 @SU-SCORE.ARPA:ROD@SU-AI Departmental Lecture Series
Received: from SU-SCORE by SU-AI with TCP/SMTP; 14 Sep 83 22:03:01 PDT
Received: from SU-AI.ARPA by SU-SCORE.ARPA with TCP; Wed 14 Sep 83 22:02:54-PDT
Date: 14 Sep 83 2001 PDT
From: Rod Brooks <ROD@SU-AI>
Subject: Departmental Lecture Series
To: faculty@SU-SCORE
CS200 Departmental Lecture Series is described as "Weekly
presentations by members of the department, each describing
informally his or her current research interests and views
of computer science as a whole."
To reserve a slot (2:45 Thursdays) in this series volunteer to me.
Rod Brooks
∂15-Sep-83 1314 ELYSE@SU-SCORE.ARPA Updating of Faculty Interests for 83-84
Received: from SU-SCORE by SU-AI with TCP/SMTP; 15 Sep 83 13:14:07 PDT
Date: Thu 15 Sep 83 13:14:19-PDT
From: Elyse Krupnick <ELYSE@SU-SCORE.ARPA>
Subject: Updating of Faculty Interests for 83-84
To: CSD-Faculty: ;
cc: Yearwood@SU-SCORE.ARPA
Stanford-Phone: (415) 497-9746
I need to update the Fac. Interest lists before the new students come in.
I'll be sending each of you the blurb from last year and I'd like you to
either update it or let me know it's okay as it stands. Please do this as
soon as you can. Thanks so much, Elyse.
-------
∂15-Sep-83 1320 LENAT@SU-SCORE.ARPA Colloquium
Received: from SU-SCORE by SU-AI with TCP/SMTP; 15 Sep 83 13:20:03 PDT
Date: Thu 15 Sep 83 13:20:25-PDT
From: Doug Lenat <LENAT@SU-SCORE.ARPA>
Subject: Colloquium
To: faculty@SU-SCORE.ARPA
As before, our colloquium is Tuesday, 4:15, in Terman Aud.
I am currently assembling the Fall schedule, and would
greatly welcome suggestions and volunteers to speak. If you
have a visitor (for a week or a quarter or just a Tuesday) who
would be an appropriate speaker, please let me know.
If you have some material you'd like to present to the
department as a whole, I encourage that, too. Finally,
I would like suggestions about outside speakers to invite,
people you'd like to hear. Finally, (you've heard THAT
before) tell me how you feel about an Invited Speakers
programme, in which we invite a handful of distinguished
computer scientists, publicize their talks, and pay them handsome
honoraria. If you like the idea, I expect I can get the
Computer Forum to set up a fund for that purpose, but I
want to sample your opinions before proceeding.
Doug Lenat
-------
∂15-Sep-83 1354 @SU-SCORE.ARPA:TW@SU-AI
Received: from SU-SCORE by SU-AI with TCP/SMTP; 15 Sep 83 13:53:49 PDT
Received: from SU-AI.ARPA by SU-SCORE.ARPA with TCP; Thu 15 Sep 83 13:46:14-PDT
Date: 15 Sep 83 1344 PDT
From: Terry Winograd <TW@SU-AI>
To: lenat@SU-SCORE, faculty@SU-SCORE
On the distinguished speakers program, it's not clear to me
how much we get for our money. Our students have the good
fortune of being exposed to many top people just by proximity
and people coming through. It's not clear a lot more can
be gained by using our money in this way. --t
∂15-Sep-83 1511 cheriton%SU-HNV.ARPA@SU-SCORE.ARPA Re: Colloquium
Received: from SU-SCORE by SU-AI with TCP/SMTP; 15 Sep 83 15:11:20 PDT
Received: from Diablo by Score with Pup; Thu 15 Sep 83 15:11:17-PDT
Date: Thu, 15 Sep 83 15:02 PDT
From: David Cheriton <cheriton@Diablo>
Subject: Re: Colloquium
To: LENAT@SU-Score, faculty@SU-Score
I think the distinguished invited speaker idea is good providing
someone is willing to do it right. I think what Terry W. says
is right except many distinguished researchers do not come here
specifically to give a talk at Stanford but just as part of doing
something else in the area. Giving more attention to a visit could
improve the benefit to us and students. Also, there are people I
would like to have visit who don't tend to travel without some prompting.
∂15-Sep-83 2007 LAWS@SRI-AI.ARPA AIList Digest V1 #57
Received: from SRI-AI by SU-AI with TCP/SMTP; 15 Sep 83 20:05:40 PDT
Date: Thursday, September 15, 1983 4:57PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #57
To: AIList@SRI-AI
AIList Digest Friday, 16 Sep 1983 Volume 1 : Issue 57
Today's Topics:
Artificial Intelligence - Public Recognition,
Programming Languages - Multiple Inheritance & Micro LISPs,
Query Systems - Talk by Michael Hess,
AI Architectures & Prolog - Talk by Peter Borgwardt,
AI Architectures - Human-Nets Reprints
----------------------------------------------------------------------
Date: 10 Sep 1983 21:44:16-PDT
From: Richard Tong <fuzzy1@aids-unix>
Subject: "some guy named Alvey"
John Alvey is Senior Director, Technology, at British Telecom. The
committee that he headed reported to the British Minister for
Information Technology in September 1982 ("A Program for Advanced
Information Technology", HMSO 1982).
The committee was formed in response to the announcement of the
Japanese 5th Generation Project at the behest of the British
Information Technology Industry.
The major recommendations were for increased collaboration within
industry, and between industry and academia, in the areas of Software
Engineering, VLSI, Man-Machine Interfaces and Intelligent
Knowledge-Based Systems. The recommended funding levels were
approximately $100M, $145M, $66M and $40M respectively.
The British Government's response was entirely positive and resulted
in the setting up of a small Directorate within the Department of
Industry. This is staffed by people from industry and supported by
the Government.
The most obvious results so far have been the creation of several
Information Technology posts in various universities. Whether the
research money will appear as quickly remains to be seen.
Richard.
------------------------------
Date: Mon 12 Sep 83 22:35:21-PDT
From: Edward Feigenbaum <FEIGENBAUM@SUMEX-AIM>
Subject: The world turns; would you believe...
[Reprinted from the SU-SCORE bboard.]
1. A thing called the Wall Street Computer Review, advertising
a conference on computers for Wall Street professionals, with
keynote speech by Isaac Asimov entitled "Artificial Intelligence
on Wall Street"
2. In the employment advertising section of last Sunday's NY Times,
Bell Labs (of all places!) showing Expert Systems prominently
as one of their areas of work and need, and advertising for people
to do Expert Systems development using methods of Artificial
Intelligence research. Now I'm looking for a big IBM ad in
Scientific American...
3. In 2 September SCIENCE, an ad from New Mexico State's Computing
Research Laboratory. It says:
"To enhance further the technological capabilities of New Mexico, the
state has funded five centers of technical excellence including
Computing Research Laboratory (CRL) at New Mexico State University.
...The CRL is dedicated to interdisciplinary research on knowledge-
based systems"
------------------------------
Date: 15 Sep 1983 15:28-EST
From: David.Anderson@CMU-CS-G.ARPA
Subject: Re: Multiple Inheritance query
For a discussion of multiple inheritance see "Multiple Inheritance in
Smalltalk-80" by Alan Borning and Dan Ingalls in the AAAI-82
proceedings. The Lisp Machine Lisp manual also has some justification
for multiple inheritance schemes in the chapter on Flavors.
--david
[See also any discussion of the LOOPS language, e.g., in the
Fall issue of AI Magazine. -- KIL]
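[For the earlier multiple-inheritance query, here is a minimal
sketch, in Prolog, of the basic mechanism: property lookup that may
succeed through more than one parent. The class and slot names are
invented for the example; real schemes such as Flavors, Smalltalk-80
and LOOPS add method combination and rules for resolving conflicts
between parents, which this sketch ignores.

    % A class may have several parents (multiple inheritance):
    parent(player, character).
    parent(player, speaker).
    slot(character, location).
    slot(speaker,   vocabulary).
    slot(player,    score).

    % A class has a slot if it defines the slot itself or if any
    % ancestor, reached through any of its parents, defines it.
    has_slot(Class, Slot) :- slot(Class, Slot).
    has_slot(Class, Slot) :- parent(Class, Super), has_slot(Super, Slot).

    % ?- has_slot(player, location).    succeeds via character
    % ?- has_slot(player, vocabulary).  succeeds via speaker
    % ?- has_slot(player, score).       succeeds directly
]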
------------------------------
Date: Wed 14 Sep 83 19:16:41-EDT
From: Ted Markowitz <TJM@COLUMBIA-20.ARPA>
Subject: Info on micro LISP dialects
Has anyone evaluated versions of LISP that run on micros? I'd like to
find out what's already out there and people's impressions of them.
The hardware would be something in the nature of an IBM PC or a DEC
Rainbow.
--ted
------------------------------
Date: 12 Sep 1983 1415-PDT
From: Ichiki
Subject: Talk by Michael Hess
[This talk will be given at the SRI AI Center. Visitors
should come to E building on Ravenswood Avenue in Menlo
Park and call Joani Ichiki, x4403.]
Text Based Question Answering Systems
-------------------------------------
Michael Hess
University of Texas, Austin
Friday, 16 September, 10:30, EK242
Question Answering Systems typically operate on Data Bases consisting
of object level facts and rules. This, however, limits their
usefulness quite substantially. Most scientific information is
represented as Natural Language texts. These texts provide relatively
few basic facts but do give detailed explanations of how they can be
interpreted, i.e. how the facts can be linked with the general laws
which either explain them, or which can be inferred from them. This
type of information, however, does not lend itself to an immediate
representation on the object level.
Since there are no known proof procedures for higher order logics we
have to find makeshift solutions for a suitable text representation
with appropriate interpretation procedures. One way is to use the
subset of First Order Predicate Calculus as defined by Prolog as a
representation language, and a General Purpose Planner (implemented in
Prolog) as an interpreter. Answering a question over a textual data
base can then be reduced to proving the answer in a model of the world
as described in the text, i.e. to planning a sequence of actions
leading from the state of affairs given in the text to the state of
affairs given in the question. The meta-level information contained in
the text is used as control information during the proof, i.e. during
the execution of the simulation in the model. Moreover, the format of
the data as defined by the planner makes explicit some kinds of
information particularly often addressed in questions.
The simulation of an experiment in the Blocks World, using the kind of
meta-level information important in real scientific experiments, can
be used to generate data which, when generalised, could be used
directly as DB for question answering about the experiment.
Simultaneously, it serves as a pattern for the representation of
possible texts describing the experiment. The question of how to
translate NL questions and NL texts into this kind of format,
however, has yet to be solved.
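[As a minimal illustration of the contrast drawn in the first
paragraph -- the example is invented and is not taken from Hess's
system -- an object-level data base is just Prolog facts and rules,
and a question is an ordinary query against them; the abstract is
concerned with the further, meta-level information a text supplies
about how such facts are to be interpreted.

    % Object-level facts, as they might be extracted from a text:
    on(block_b, block_a).
    on(block_a, table).

    % An object-level rule linking the facts:
    above(X, Y) :- on(X, Y).
    above(X, Y) :- on(X, Z), above(Z, Y).

    % A question about the text is then an ordinary query:
    % ?- above(block_b, table).
    %    succeeds
]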
------------------------------
Date: 12 Sep 1983 1730-PDT
From: Ichiki
Subject: Talk by Peter Borgwardt
[This talk will be given at the SRI AI Center. Visitors
should come to E building on Ravenswood Avenue in Menlo
Park and call Joani Ichiki, x4403.]
There will be a talk given by Peter Borgwardt on Monday, 9/19 at
10:30am in Conference Room EJ222. Abstract follows:
Parallel Prolog Using Stack Segments
on Shared-memory Multiprocessors
Peter Borgwardt
Computer Science Department
University of Minnesota
Minneapolis, MN 55455
Abstract
A method of parallel evaluation for Prolog is presented for
shared-memory multiprocessors that is a natural extension of the
current methods of compiling Prolog for sequential execution. In
particular, the method exploits stack-based evaluation with stack
segments spread across several processors to greatly reduce the need
for garbage collection in the distributed computation. AND
parallelism and stream parallelism are the most important sources of
concurrent execution in this method; these are implemented using local
process lists; idle processors may scan these and execute any process
as soon as its consumed (input) variables have been defined by the
goals that produce them. OR parallelism is considered less important
but the method does implement it with process numbers and variable
binding lists when it is requested in the source program.
------------------------------
Date: Wed, 14 Sep 83 07:31 PDT
From: "Glasser Alan"@LLL-MFE.ARPA
Subject: human-nets discussion on AI and architecture
Ken,
I see you have revived the Human-nets discussion about AI and
computer architecture. I initiated that discussion and saved all
the replies. I thought you might be interested. I'm sending them
to you rather than AILIST so you can use your judgment about what
if anything you might like to forward to AILIST.
Alan
[The following is the original message. The remainder of this
digest consists of the collected replies. I am not sure which,
if any, appeared in Human-Nets. -- KIL]
---------------------------------------------------------------------
Date: 4 Oct 1982 (Monday) 0537-EDT
From: GLASSER at LLL-MFE
Subject: artificial intelligence and computer architecture
I am a new member of the HUMAN-NETS interest group. I am also
newly interested in Artificial Intelligence, partly as a result of
reading "Goedel,Escher,Bach" and similar recent books and articles
on AI. While this interest group isn't really about AI, there isn't
any other group which is, and since this one covers any computer
topics not covered by others, this will do as a forum.
From what I've read, it seems that most or all AI work now
being done involves using von Neumann computer programs to model
aspects of intelligent behavior. Meanwhile, others like Backus
(IEEE Spectrum, August 1982, p.22) are challenging the dominance of
von Neumann computers and exploring alternative programming styles
and computer architectures. I believe there's a crucial missing link
in understanding intelligent behavior. I think it's likely to
involve the nature of associative memory, and I think the key to it
is likely to involve novel concepts in computer architecture.
Discovery of the structure of associative memory could have an
effect on AI similar to that of the discovery of the structure of
DNA on genetics. Does anyone out there have similar ideas? Does
anyone know of any research and/or publications on this sort of
thing?
---------------------------------------------------------------------
Date: 15 Oct 1982 1406-PDT
From: Paul Martin <PMARTIN at SRI-AI>
Subject: Re: HUMAN-NETS Digest V5 #96
Concerning the NON-VON project at Columbia, David Shaw, formerly of
the Stanford A. I. Lab, is using the development of some
non-von Neumann hardware designs to make an interesting class of
database access operations no longer require times that are
exponential with the size of the db. He wouldn't call his project
AI, but rather an approach to "breaking the von Neumann bottleneck"
as it applies to a number of well-understood but poorly solved
problems in computing.
---------------------------------------------------------------------
Date: 28 Oct 1982 1515-EDT
From: David F. Bacon
Subject: Parallelism and AI
Reply-to: Columbia at CMU-20C
Parallel Architectures for Artificial Intelligence at Columbia
While the NON-VON supercomputer is expected to provide significant
performance improvements in other areas as well, one of the
principal goals of the project is the provision of highly efficient
support for large-scale artificial intelligence applications. As
Dr. Martin indicated in his recent message, NON-VON is particularly
well suited to the execution of relational algebraic operations. We
believe, however, that such functions, or operations very much like
them, are central to a wide range of artificial intelligence
applications.
In particular, we are currently developing a parallel version of the
PROLOG language for NON-VON (in addition to parallel versions of
Pascal, LISP and APL). David Shaw, who is directing the NON-VON
project, wrote his Ph.D. thesis at the Stanford A.I. Lab on a
subject related to large-scale parallel AI operations. Many of the
ideas from his dissertation are being exploited in our current work.
The NON-VON machine will be constructed using custom VLSI chips,
connected according to a binary tree-structured topology. NON-VON
will have a very "fine granularity" (that is, a large number of very
small processors). A full-scale NON-VON machine might embody on the
order of 1 million processing elements. A prototype version
incorporating 1000 PE's should be running by next August.
In addition to NON-VON, another machine called DADO is being
developed specifically for AI applications (for example, an optimal
running time algorithm for Production System programs has already
been implemented on a DADO simulator). Professor Sal Stolfo is
principal architect of the DADO machine, and is working in close
collaboration with Professor Shaw. The DADO machine will contain a
smaller number of more powerful processing elements than NON-VON,
and will thus have a "coarser" granularity. DADO is being
constructed with off-the-shelf Intel 8751 chips; each processor will
have 4K of EPROM and 8K of RAM.
Like NON-VON, the DADO machine will be configured as a binary tree.
Since it is being constructed using "off-the-shelf" components, a
working DADO prototype should be operational at an earlier date than
the first NON-VON machine (a sixteen node prototype should be
operational in three weeks!). While DADO will be of interest in its
own right, it will also be used to simulate the NON-VON machine,
providing a powerful testbed for the investigation of massive
parallelism.
As some people have legitimately pointed out, parallelism doesn't
magically solve all your problems ("we've got 2 million processors,
so who cares about efficiency?"). On the other hand, a lot of AI
problems simply haven't been practical on conventional machines, and
parallel machines should help in this area. Existing problems are
also sped up substantially [ O(N) sort, O(1) search, O(n↑2) matrix
multiply ]. As someone already mentioned, vision algorithms seem
particularly well suited to parallelism -- this is being
investigated here at Columbia.
New architectures won't solve all of our problems -- it's painfully
obvious on our current machines that even fast expensive hardware
isn't worth a damn if you haven't got good software to run on it,
but even the best of software is limited by the hardware. Parallel
machines will overcome one of the major limitations of computers.
David Bacon
NON-VON/DADO Research Group
Columbia University
------------------------------
Date: 7 Nov 82 13:43:44 EST (Sun)
From: Mark Weiser <mark.umcp-cs@UDel-Relay>
Subject: Re: Parallelism and AI
Just to mention another project, The CS department at the University
of Maryland has a parallel computing project called Zmob. A Zmob
consists of 256 Z-80 processors called moblets, each with 64k
memory, connected by a 48 bit wide high speed shift register ring
network (100ns/shift, 25.6us/revolution) called the "conveyer
belt". The conveyer belt acts almost like a 256x256 cross-bar since
it rotates faster than a z-80 can do significant I/O, and it also
provides for broadcast messages and messages sent and received by
pattern match. Each Z-80 has serial and parallel ports, and the
whole thing is served by a Vax which provides cross-compiling and
file access.
There are four projects funded and working on Zmob (other than the
basic hardware construction), sponsored by the Air Force. One is
parallel numerical analysis, matrix calculations, and the like (the
Z-80's have hardware floating point). The second is parallel image
processing and vision. The third is distributed problem solving
using Prolog. The fourth (mine) is operating systems and software,
developing remote-procedure-call and a distributed version of Unix
called Mobix.
A two-moblet prototype was working a year and a half ago, and we hope
to bring up a 128 processor version in the next few months. (The
boards are all PC'ed and stuffed but timing problems on the bus are
temporarily holding things back).
------------------------------
End of AIList Digest
********************
∂16-Sep-83 1326 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #27
Received: from SU-SCORE by SU-AI with TCP/SMTP; 16 Sep 83 13:23:07 PDT
Date: Thursday, September 15, 1983 11:03PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #27
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Friday, 16 Sep 1983 Volume 1 : Issue 27
Today's Topics:
Implementations - Prolog in Lisp & FooLog,
Puzzle - Solution,
Representation - Declaring Predicates Transitive
----------------------------------------------------------------------
Date: Sat 10 Sep 83 09:58:52-PDT
From: Pereira@SRI-AI
Subject: Prolog in Lisp ( again... )
The Symbolics 3600 hardware, used right, is likely to deliver
around 100,000 LIPS, if not more. This kind of use is not
accessible from Lisp, even with subprimitives. That's what I
meant by "artificial limitations".
I didn't say that Everything should be written in Prolog. However,
lots of things ( such as the debugger ) Should be that are not in most
systems ( I don't know about the latest LM-Prolog ). That is only
practical if the basic compiler is efficient enough in space and time.
As to the question of whether ZetaLisp/Lisp 1.5 = LM-Prolog/DEC-20
Prolog, I don't want to engage in polemics, so I will leave it to
independent users of both systems to decide. My feeling about
the subject, however, is that ZetaLisp's "conceptual homogeneity"
is far better than LM-Prolog's. ( Another point of contention is
whether all those "goodies" in LM-Prolog are even marginally
justifiable from a logic programming standpoint. DEC-10/20 Prolog
is already shaky from that point of view as it is ).
-- Fernando
------------------------------
Date: Mon, 12 Sep 1983 05:40 EDT
From: Ken%MIT-OZ@MIT-MC
[ I'm forwarding the following message from Martin Nilsson. - Ken ]
Earlier I said that the FooLog compiler compiled
naive-reverse to half the speed of the Dec-10
Prolog compiler. That is unfortunately incorrect.
The speed is only a quarter ( sigh ) of Dec-10
compiled Prolog. I made a mistake when measuring
the speed of Dec-10 Prolog using the predicate STAT:
stat :- statistics(runtime,_),
        naive_reverse([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,
                       1,2,3,4,5,6,7,8,9,10,11,12,13,14,15],X),
        statistics(runtime,[_,A2]), A3 is (498*1000/A2),
        write((A2,ms,A3,lips)), nl.
The expression 498*1000/A2 gives only half of the real
speed ( Why? Shouldn't this work without overflow ? ).
1000/A2*498 works better. Sorry.
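[Reader's note, not part of the original message: one plausible answer to the "Why?" above is that intermediate results in DEC-10 Prolog arithmetic are bounded machine integers, so the product 498*1000 = 498000 overflows and wraps, which would account for the roughly halved figure; dividing first keeps every intermediate value small. A minimal sketch of that idea, with a hypothetical helper predicate:]

% lips(LI, Ms, Lips): LI logical inferences run in Ms milliseconds.
lips(LI, Ms, Lips) :-
    Lips is LI * (1000 / Ms).    % divide first, as in 1000/A2*498 above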
------------------------------
Date: 11 Sep 1983 2:03-EDT
From: Dan Hoey <Hoey@NRL-AIC>
Subject: Number Puzzle Again
I arrived at the same answer that Vivek Sarkar did, except that my
program to generate all solutions found the following for n < 7:
n=0: .
n=4: 1210, 2020.
n=5: 21200.
Thus it is demonstrated that the exhaustive search method works out
even better in Lisp than in Pascal ( perhaps someone will find even
more solutions in Snobol? ).
Now if someone would write a program to find the proof of the general
result, I would be impressed.
-- Dan
------------------------------
Date: Mon 12 Sep 83 13:58:22-PDT
From: Vivek Sarkar <JLH.Vivek@SU-SIERRA>
Subject: Carl Ponder's Comment About Transitivity
In reply to Carl Ponder's comment about my solution to the
transitive (R) relation, my solution was intended to be restricted
to finite relations.
Perhaps I misunderstood the original problem, but most of that
discussion seemed to center around Prolog programming problems
( mainly infinite recursion ) which people encountered when
considering finite relations. I wanted to find a clean Prolog
solution ( without assert's & retract's ), which had
O( sqr(|R|) ) running time complexity, like that of a
simple-minded procedural language solution.
I agree with Carl that non-transitivity is non-recursive, in
general. But I'm sure that Carl will agree that non-transitivity
of finite relations is indeed recursive, and that was what I had
provided a program for.
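[A reader's sketch of the kind of check described, not Vivek's own program; the relation name r/2 and the example facts are hypothetical. A finite relation fails to be transitive exactly when some chain r(A,B), r(B,C) lacks r(A,C), and the join below enumerates at most |R|*|R| such chains.]

r(a, b).
r(b, c).
r(a, c).

non_transitive(A, B, C) :-
    r(A, B), r(B, C), \+ r(A, C).

transitive :-
    \+ non_transitive(_, _, _).

% ?- transitive.               succeeds for the facts above
% ?- non_transitive(A, B, C).  fails: no counterexample chain exists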
This got me thinking about what classes of relations lend
themselves to decidability of transitivity. We already
have two extremes:
i ) The relations are non-recursive, and transitivity is
undecidable.
ii ) The relations are finite, and transitivity is decidable.
What about restricted forms of infinite relations? So far I have
discerned ( with proof ) the following:
1. If the relations are known to be regular ( I.e. the relation
can be defined by a Finite State Machine acceptor with two
parallel, synchronised input streams ) then transitivity
is decidable.
2. If the relations are known to be recursive, then transitivity
is not decidable.
I don't know what the answer is for relations which can be
classified as context-free, though my conjecture is that
transitivity would still not be decidable.
-- Vivek
------------------------------
End of PROLOG Digest
********************
∂16-Sep-83 1714 LAWS@SRI-AI.ARPA AIList Digest V1 #58
Received: from SRI-AI by SU-AI with TCP/SMTP; 16 Sep 83 17:12:40 PDT
Date: Friday, September 16, 1983 4:10PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #58
To: AIList@SRI-AI
AIList Digest Saturday, 17 Sep 1983 Volume 1 : Issue 58
Today's Topics:
Automatic Translation - Ada,
Games - Go Programs & Foonberg's Number Problem,
Artificial Intelligence - Turing Test & Creativity
----------------------------------------------------------------------
Date: 10 Sep 83 13:50:18-PDT (Sat)
From: decvax!wivax!linus!vaxine!wjh12!foxvax1!brunix!rayssd!sdl@Ucb-Vax
Subject: Re: Translation into Ada: Request for Info
Article-I.D.: rayssd.142
There have been a number of translators from Pascal to Ada, the first
successful one I know of was developed at UC Berkeley by P. Albrecht,
S. Graham et al. See the "Source-to-Source Translation" paper in the
1980 Proceedings of Sigplan Symp. on Ada, Dec. 1980.
At Univ. S. Calif. Info. Sci. Institute (USC-ISI), Steve Crocker (now
at the Aerospace Corp.) developed AUTOPSY, a translator from CMS-2 to
Ada. (CMS-2 is the Navy standard language for embedded software.)
Steve Litvintchouk
Raytheon Company
Portsmouth, RI 02871
------------------------------
Date: 10 Sep 83 13:56:17-PDT (Sat)
From: decvax!wivax!linus!vaxine!wjh12!foxvax1!brunix!rayssd!sdl@Ucb-Vax
Subject: Re: Go Tournament
Article-I.D.: rayssd.143
ARE there any available Go programs which run on VAX/UNIX which I
could obtain? (Either commercially sold, or available from
universities, or whatever.)
I find Go fascinating and would love to have a Go program to play
against.
Please reply via USENET, or to:
Steve Litvintchouk
Raytheon Company
Submarine Signal Division
Portsmouth, RI 02871
(401)847-8000 x4018
------------------------------
Date: 14 Sep 1983 16:18-EDT
From: Dan Hoey <hoey@NRL-AIC>
Subject: Alan Foonberg's number problem
I'm surprised you posted Alan Foonberg's number problem on AIlist
since Vivek Sarkar's solution has already appeared (Prolog digest V1
#28). I enclose his solution below. His solution unfortunately omits
the special cases , 2020, and 21200; I have sent a correction to the
Prolog digest.
Dan
------------------------------
Date: Wed 7 Sep 83 11:08:08-PDT
From: Vivek Sarkar <JLH.Vivek@SU-SIERRA>
Subject: Solution to Alan Foonberg's Number Puzzle
Here is a general solution to the puzzle posed by Alan Foonberg:
My generalisation is to consider n-digit numbers in base n. The
digits can therefore take on values in the range 0 .. n-1 .
A summary of the solution is:
n = 4: 1210
n >= 7: (n-4) 2 1 0 0 ... 0 0 1 0 0 0
                  <--------->
                  (n-7) 0's
Further, these describe ALL possible solutions, i.e. radix values of
2,3,5,6 have no solutions, and other values have exactly one solution
for each radix.
Proof:
Case 2 <= n <= 6: Consider these as singular cases. It is simple to
show that there are no solutions for 2,3,5,6 and that 1210 is the only
solution for 4. You can do this by writing a program to generate all
solutions for a given radix. ( I did that; unfortunately it works out
better in Pascal than Prolog ! )
CASE n >= 7: It is easy to see that the given number is indeed a
solution. ( The rightmost 1 represents the single occurrence of (n-4)
at the beginning ). For motivation, we can substitute n=10 and get
6210001000, which was the decimal solution provided by Alan.
The tough part is to show that this represents the only solution, for
a given radix. We do this by considering all possible values for the
first digit ( call it d0 ) and showing that d0=(n-4) is the only one
which can lead to a solution.
SUBCASE d0 < (n-4): Let d0 = n-4-j, where j>=1. Therefore the number
has (n-4-j) 0's, which leaves (j+3) non-zero digits apart from d0.
Further, these (j+3) digits must add up to (j+4). ( The sum of the
digits of a solution must be n: each digit counts the occurrences of
its positional value, and every one of the n digits is counted exactly
once, so the digit values sum to n. ) The only way that (j+3) non-zero digits
can add up to (j+4) is by having (j+2) 1's and one 2. If there are
(j+2) 1's, then the second digit from the left, which counts the
number of 1's (call it d1) must = (j+2). Since j >= 1, d1=(j+2) is
neither a 1 nor a 2. Contradiction !
SUBCASE d0 > (n-4): This leads to 3 possible values for d0: (n-1),
(n-2) & (n-3). It is simple to consider each value and see that it
can't possibly lead to a solution, by using an analysis similar to the
one above.
We therefore conclude that d0=(n-4), and it is straightforward to show
that the given solution is the only possible one, for this value of
d0.
-- Q.E.D.
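[A reader's sketch, not part of the messages above, added only to make the property being proved concrete; all predicate names are hypothetical. A list of n digits is "self-describing" in base n when digit I equals the number of occurrences of I in the whole list; the digit-sum argument used in the proof holds because every one of the n digits is then counted exactly once.]

self_describing(Digits) :-
    check(Digits, 0, Digits).

% check(Rest, I, All): each remaining digit D at position I must equal
% the number of occurrences of I in the full list All.
check([], _, _).
check([D|Rest], I, All) :-
    count(All, I, D),
    I1 is I + 1,
    check(Rest, I1, All).

count([], _, 0).
count([X|Xs], X, N) :- !,
    count(Xs, X, M), N is M + 1.
count([_|Xs], X, N) :-
    count(Xs, X, N).

% ?- self_describing([6,2,1,0,0,0,1,0,0,0]).   succeeds  (n = 10)
% ?- self_describing([1,2,1,0]).               succeeds  (n = 4)
% ?- self_describing([2,0,2,0]).               succeeds  (n = 4)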
------------------------------
Date: Wed 14 Sep 83 17:25:38-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Re: Alan Foonberg's number problem
Thanks for the note and the correction. I get the Prolog digest
a little delayed, so I hadn't seen the answer at the time I relayed
the problem.
My purpose in sending out the problem actually had nothing to do with
finding the answer. The answer you forwarded is a nice mathematical
proof, but the question is whether and how AI techniques could solve
the problem. Would an AI program have to reason in the same manner as
a mathematician? Would different AI techniques lead to different
answers? How does one represent the problem and the solution in
machine-readable form? Is this an interesting class of problems for
cognitive science to deal with?
I was expecting that someone would respond with a 10-line PROLOG
program that would solve the problem. The discussion that followed
might contrast that with the LISP or ALGOL infrastructure needed to
solve the problem. Now, of course, I don't expect anyone to present
algorithmic solutions.
-- Ken Laws
------------------------------
Date: 9 Sep 83 13:15:56-PDT (Fri)
From: harpo!floyd!cmcl2!csd1!condict @ Ucb-Vax
Subject: Re: in defense of Turing - (nf)
Article-I.D.: csd1.116
A comment on the statement that it is easy to trip up an allegedly
intelligent machine that generates responses by using the input as an
index into an array of possible outputs: Yes, but this machine has no
state and hence hardly qualifies as a machine at all! The simple
tricks you described cannot be used if we augment it to use the entire
sequence of inputs so far as the index, instead of just the most
recent one, when generating its response. This allows it to take into
account sequences that contain runs of identical inputs and to
understand inputs that refer to previous inputs (or even
Hofstadteresque self-referential inputs). My point is not that this
new machine cannot be tripped up but that the one described is such a
straw man that fooling it gives no information about the real
difficulty of programming a computer to pass the Turing test.
------------------------------
Date: 10 Sep 83 22:20:39-PDT (Sat)
From: decvax!wivax!linus!philabs!seismo!rlgvax!cvl!umcp-cs!speaker@Ucb-Vax
Subject: Re: in defense of Turing
Article-I.D.: umcp-cs.2538
It should be fairly obvious that the Turing test is not a precise
test to determine intelligence because the very meaning of the
word 'intelligence' cannot be precisely pinned down, despite what
your Oxford dictionary might say.
I think the idea here is that if a machine can perform such that
it is indistinguishable from the behavior of a human then it can
be said to display human intelligence. Note that I said, "human
intelligence."
It is even debatable whether certain members of the executive branch
can be said to be intelligent. If we can't apply the Turing test
there... then surely we're just spinning our wheels in an attempt
to apply it universally.
- Speaker
--
Full-Name: Speaker-To-Animals
Csnet: speaker@umcp-cs
Arpa: speaker.umcp-cs@UDel-Relay
This must be hell...all I can see are flames... towering flames!
------------------------------
Date: Wed 14 Sep 83 12:35:11-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: intelligence and genius
[This continues a discussion on Human-Nets. My original statement,
printed below, was shot down by several people. Individuals certainly
derive satisfaction from hobbies at which they will never excel. It
would take much of the fun out of my life, however, if I could not
even imagine excelling at anything because cybernetic life had
surpassed humans in every way. -- KIL]
From: Ken Laws <Laws@SRI-AI.ARPA>
Life will get even worse if AI succeeds in automating true
creativity. What point would there be in learning to paint,
write, etc., if your home computer could knock out more
artistic creations than you could ever hope to master?
I was rather surprised that this suggestion was taken so quickly
as it stands. Most people in AI believe that we will someday create an
"intelligent" machine, but Ken's claim seems to go beyond that;
"automating true creativity" seems to be saying that we can create not
just intelligent, but "genius" systems, at will. The automation of
genius is a more sticky claim in my mind.
For example, if we create an intelligent system, do we make it a
genius system by just turning up the speed or increasing its memory?
That"s like saying a painter could become Rembrandt if he/she just
painted 1000 times more. More likely is that the wrong (or uncreative)
ideas would simply pour out faster, or be remembered longer. Turning
up the speed of the early blind-search chess programs made them
marginally better players, but no more creative.
Or let's say we stumble onto the creation of some genius system,
call it "Einstein". Do we get all of the new genius systems we need by
merely duplicating "Einstein", something impossible to do with human
systems? Again, we hit a dead end... "Einstein" will only be useful in
a small domain of creativity, and will never be a Bach or a Rembrandt
no matter how many we clone. Even more discouraging, if we xerox off
1000 of our "Einstein" systems, do we get 1000 times the creative
ideas? Probably not; we will cover the range of "Einstein's" potential
creativity better, but that's it. Even a genius has only a range of
creativity.
What is it about genius systems that makes them so intractable?
If we will someday create intelligent systems consistently and
reliably, what stands in the way of creating genius systems on demand?
I would suggest that statistics get in our way here; that genius
systems cannot be created out of dust, but that every once in a while,
an intelligent system has the proper conditioning and evolves into a
genius system. In this light, the number of genius systems possible
depends on the pool of intelligent systems that are available as
substrate.
In short, while I feel we will be able to create intelligent
systems, we will not be able to directly construct superintelligent
ones. While there will be advantages in duplicating, speeding up, or
otherwise manipulating a genius system once created, the process of
creating one will remain maddeningly elusive.
David Rogers DRogers@SUMEX-AIM.ARPA
[I would like to stake out a middle ground: creative systems.
We will certainly have intelligent systems, and we will certainly have
trouble devising genius systems. (Genius in human terms: I don't want
to get into whether an AI program can be >>sui generis<< if we can
produce a thousand variations of it before breakfast.) A [scientific]
genius is someone who develops an idea for which there is, or at least
seems to be, no precedent.
Creativity, however, can exist in a lesser being. Forget Picasso,
just consider an ordinary artist who sees a new style of bold,
imaginative painting. The artist has certain inborn or learned
measures of artistic merit: color harmony, representational accuracy,
vividness, brush technique, etc. He evaluates the new painting and
finds that it exists in a part of his artistic "parameter space" that
he has never explored. He is excited, and carefully studies the
painting for clues as to the techniques that were used. He
hypothesizes rules for creating similar visual effects, tries them out,
modifies them, iterates, adds additional constraints (yes, but can I
do it with just rectangles ...), etc. This is creativity. Nothing
that I have said above precludes our artist from being a machine.
Another example, which I believe I heard from a recent Stanford Ph.D.
(sorry, can't remember who): consider Solomon's famous decision.
Everyone knows that a dispute over property can often be settled by
dividing the property, providing that the value of the property is not
destroyed by the act of division. Solomon's creative decision
involved the realization (at least, we hope he realized it) that in a
particular case, if the rule was implemented in a particular
theatrical manner, the precondition could be ignored and the rule
would still achieve its goal. We can then imagine Solomon to be a
rule-based system with a metasystem that is constantly checking for
generalizations, specializations, and heuristic shortcuts to the
normal rule sequences. I think that Doug Lenat's EURISKO program has
something of this flavor, as do other learning programs.
In the limit, we can imagine a system with nearly infinite computing
power that builds models of its environment in its memory. It carries
out experiments on this model, and verifies the experiments by
carrying them out in the real world when it can. It can solve
ordinary problems through various applicable rule invocations,
unifications, planning, etc. Problems requiring creativity can often
be solved by applying inappropriate rules and techniques (i.e.,
violating their preconditions) just to see what will happen --
sometimes it will turn out that the preconditions were unnecessarily
strict. [The system I have just described is a fair approximation to
a human -- or even to a monkey, dog, or elephant.]
True genius in such a system would require that it construct new
paradigms of thought and problem solving. This will be much more
difficult, but I don't doubt that we and our cybernetic offspring will
even be able to construct such progeny someday.
-- Ken Laws ]
------------------------------
End of AIList Digest
********************
∂19-Sep-83 1143 REGES@SU-SCORE.ARPA Research support for new PhD students
Received: from SU-SCORE by SU-AI with TCP/SMTP; 19 Sep 83 11:43:05 PDT
Date: Mon 19 Sep 83 11:43:44-PDT
From: Stuart Reges <REGES@SU-SCORE.ARPA>
Subject: Research support for new PhD students
To: faculty@SU-SCORE.ARPA
Office: Margaret Jacks 260, 497-9798
I know that many of you have already looked over the applications of the new
PhD students and selected students you are willing to support. About half of
the students are currently supported.
I have volunteered to try to match up students with PIs in the next few weeks.
If any of you still have openings for RAs, please let me know how many you can
support, what kind of background the students should have, etc. Amy Atkinson
has their RA applications if any of you are interested in glancing over them.
If anyone is thinking of taking on some Masters students as RAs, I would suggest
posting a BBOARD message about it. I am not talking about the MS-AI students.
Carole Miller and someone in Robotics are doing that, I believe. I have
applications for many eager Master's students if anyone wants to look at them.
By the way, does anyone have a place for Bing-Chao Huang? He is a PhD student
looking for support this Fall. He just got an A+ on his programming project.
If anyone thinks they might have something for him, please let me know.
-------
∂19-Sep-83 1751 LAWS@SRI-AI.ARPA AIList Digest V1 #59
Received: from SRI-AI by SU-AI with TCP/SMTP; 19 Sep 83 17:48:20 PDT
Date: Monday, September 19, 1983 4:16PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #59
To: AIList@SRI-AI
AIList Digest Tuesday, 20 Sep 1983 Volume 1 : Issue 59
Today's Topics:
Programming Languages - Micro LISP Reviews,
Machine Translation - Ada & Dictionary Request & Grammar Translation,
AI Journals - Addendum,
Bibliography - SNePS Research Group
----------------------------------------------------------------------
Date: Mon, 19 Sep 1983 11:41 EDT
From: WELD%MIT-OZ@MIT-MC
Subject: Micro LISPs
For a survey of micro LISPs see the August and Sept issues of
Microsystems magazine. The Aug issue reviews muLISP, Supersoft LISP
and The Stiff Upper Lisp. I believe that the Sept issue will continue
the survey with some more reviews.
Dan
------------------------------
Date: 14 Sep 83 1:44:58-PDT (Wed)
From: decvax!genrad!mit-eddie!barmar @ Ucb-Vax
Subject: Re: Translation into Ada: Request for Info
Article-I.D.: mit-eddi.713
I think the reference to the WWMCS conversion effort is a bad example
when talking about automatic programming language translation. I would be
very surprised if WWMCS is written in a high-level language. It runs
on Honeywell GCOS machines, I believe, and I think that GCOS system
programming is traditionally done in GMAP (GCOS Macro Assembler
Program), especially at the time that WWMCS was written. Only a
masochist would even think of writing an automatic "anticompiler" (I
have heard of uncompilers, but those are usually restricted to
figuring out the code produced by a known compiler, not arbitrary
human coding); researchers have found it hard enough to teach
computers to "understand" programs in HLLs, and it is often pretty
difficult for humans to understand others' assembler code.
--
Barry Margolin
ARPA: barmar@MIT-Multics
UUCP: ..!genrad!mit-eddie!barmar
------------------------------
Date: Mon 19 Sep 83 14:56:49-CDT
From: Werner Uhrig <CMP.WERNER@UTEXAS-20.ARPA>
Subject: Request for m/c-readable foreign language dictionary info
I am looking for foreign-language dictionaries in machine-readable
form. Of particular interest would be a subset containing
EDP-terminology. This would be used to help automate translation of
computer-related technical materials.
Of major interest are German, Spanish, French, but others might be
useful also.
Any pointers appreciated.
Werner (UUCP: ut-ngp!werner or ut-ngp!utastro!werner
via: { decvax!eagle , ucbvax!nbires , gatech!allegra!eagle ,
ihnp4 }
ARPA: werner@utexas-20 or werner@utexas-11 )
------------------------------
Date: 19 Sep 1983 0858-PDT
From: PAZZANI at USC-ECL
Subject: Parsifal
I have a question about PARSIFAL (Marcus's deterministic parser) that
I hope someone can answer:
Is it easy (or possible) to convert grammar rules to the kind of rules
that Parsifal uses? Is there an algorithm to do so?
(i.e., by grammar rule, I mean things like:
S -> NP VP
VP -> VP2 NP PP
VP -> V3 INF
INF -> to VP
etc.
where by grammar rule Marcus means things like
{RULE MAJOR-DECL-S in SS-START
[=np][=verb]-->
Label c decl,major.
Deactivate ss-start. Activate parse-subj.}
{RULE UNMARKED-ORDER IN PARSE-SUBJ
[=np][=verb]-->
Attach 1st to c as np.
Deactivate Parse-subj. Activate parse-aux.}
Thanks in advance,
Mike Pazzani
Pazzani@usc-ecl
------------------------------
Date: 16 Sep 83 16:58:30-PDT (Fri)
From: ihnp4!cbosgd!cbscc!cbscd5!lvc @ Ucb-Vax
Subject: addendum to AI journal list
Article-I.D.: cbscd5.589
The following are journals that readers have sent me since the time I
posted the list of AI journals. As has been pointed out, individuals
can get subscriptions at a reduced rate. Most of the prices I quoted
were the institutional price.
The American Journal of Computational Linguistics -- will now be called ->
Computational Linguistics
Subscription $15
Don Walker, ACL
SRI International
Menlo Park, CA 94025.
------------------------------
Cognition and Brain Theory
Lawrence Erlbaum Associates, Inc.
365 Broadway,
Hillsdale, New Jersey 07642
$18 Individual $50 Institutional
Quarterly
Basic cognition, proposed models and discussion of
consciousness and mental process, epistemology - from frames to
neurons, as related to human cognitive processes. A "fringe"
publication for AI topics, and a good forum for issues in cognitive
science/psychology.
------------------------------
New Generation Computing
Springer-Verlag New York Inc.
Journal Fulfillment Dept.
44 Hartz Way
Secaucus, NJ 07094
A quarterly English-language journal devoted to international
research on the fifth generation computer. [It seems to be
very strong on hardware and logic programming.]
1983 - 2 issues - $52. (Sample copy free.)
1984 - 4 issues - $104.
Larry Cipriani
cbosgd!cbscd5!lvc
------------------------------
Date: 16 Sep 1983 10:38:57-PDT
From: shapiro%buffalo-cs@UDel-Relay
Subject: Your request for bibliographies
Bibliography
SNeRG: The SNePS Research Group
Department of Computer Science
State University of New York at Buffalo
Amherst, New York 14226
Copies of Departmental Technical Reports (marked with an "*")
should be requested from The Library Committee, Dept. of Computer
Science, SUNY/Buffalo, 4226 Ridge Lea Road, Amherst, NY 14226.
Businesses are asked to enclose $3.00 per report requested with their
requests. Others are asked to enclose $1.00 per report.
Copies of papers other than Departmental Technical Reports may be
requested directly from Prof. Stuart C. Shapiro at the above address.
1. Shapiro, S. C. [1971] A net structure for semantic
information storage, deduction and retrieval. Proc. Second
International Joint Conference on Artificial Intelligence,
William Kaufman, Los Altos, CA, 212-223.
2. Shapiro, S. C. [1972] Generation as parsing from a network
into a linear string. American Journal of Computational
Linguistics, Microfiche 33, 42-62.
3. Shapiro, S. C. [1976] An introduction to SNePS (Semantic Net
Processing System). Technical Report No. 31, Computer
Science Department, Indiana University, Bloomington, IN, 21
pp.
4. Shapiro, S. C. and Wand, M. [1976] The Relevance of
Relevance. Technical Report No. 46, Computer Science
Department, Indiana University, Bloomington, IN, 21pp.
5. Bechtel, R. and Shapiro, S. C. [1976] A logic for semantic
networks. Technical Report No. 47, Computer Science
Department, Indiana University, Bloomington, IN, 29pp.
6. Shapiro, S. C. [1977] Representing and locating deduction
rules in a semantic network. Proc. Workshop on
Pattern-Directed Inference Systems. SIGART Newsletter 63,
14-18.
7. Shapiro, S. C. [1977] Representing numbers in semantic
networks: prolegomena. Proc. Fifth International Joint
Conference on Artificial Intelligence, William Kaufman, Los
Altos, CA, 284.
8. Shapiro, S. C. [1977] Compiling deduction rules from a
semantic network into a set of processes. Abstracts of
Workshop on Automatic Deduction, MIT, Cambridge, MA.
(Abstract only), 7pp.
9. Shapiro, S. C. [1978] Path-based and node-based inference in
semantic networks. In D. Waltz, ed. TINLAP-2: Theoretical
Issues in Natural Languages Processing. ACM, New York,
219-222.
10. Shapiro, S. C. [1979] The SNePS semantic network processing
system. In N. V. Findler, ed. Associative Networks: The
Representation and Use of Knowledge by Computers. Academic
Press, New York, 179-203.
11. Shapiro, S. C. [1979] Generalized augmented transition
network grammars for generation from semantic networks.
Proc. 17th Annual Meeting of the Association for
Computational Linguistics. University of California at San
Diego, 22-29.
12. Shapiro, S. C. [1979] Numerical quantifiers and their use in
reasoning with negative information. Proc. Sixth
International Joint Conference on Artificial Intelligence,
William Kaufman, Los Altos, CA, 791-796.
13. Shapiro, S. C. [1979] Using non-standard connectives and
quantifiers for representing deduction rules in a semantic
network. Invited paper presented at Current Aspects of AI
Research, a seminar held at the Electrotechnical Laboratory,
Tokyo, 22pp.
14. * McKay, D. P. and Shapiro, S. C. [1980] MULTI: A LISP Based
Multiprocessing System. Technical Report No. 164, Department
of Computer Science, SUNY at Buffalo, Amherst, NY, 20pp.
(Contains appendices not in LISP conference version)
15. McKay, D. P. and Shapiro, S. C. [1980] MULTI - A LISP based
multiprocessing system. Proc. 1980 LISP Conference, Stanford
University, Stanford, CA, 29-37.
16. Shapiro, S. C. and McKay, D. P. [1980] Inference with
recursive rules. Proc. First Annual National Conference on
Artificial Intelligence, William Kaufman, Los Altos, CA,
121-123.
17. Shapiro, S. C. [1980] Review of Fahlman, Scott. NETL: A
System for Representing and Using Real-World Knowledge. MIT
Press, Cambridge, MA, 1979. American Journal of
Computational Linguistics 6, 3, 183-186.
18. McKay, D. P. [1980] Recursive Rules - An Outside Challenge.
SNeRG Technical Note No. 1, Department of Computer Science,
SUNY at Buffalo, Amherst, NY, 11pp.
19. * Maida, A. S. and Shapiro, S. C. [1981] Intensional
concepts in propositional semantic networks. Technical
Report No. 171, Department of Computer Science, SUNY at
Buffalo, Amherst, NY, 69pp.
20. * Shapiro, S. C. [1981] COCCI: a deductive semantic network
program for solving microbiology unknowns. Technical Report
No. 173, Department of Computer Science, SUNY at Buffalo,
Amherst, NY, 24pp.
21. * Martins, J.; McKay, D. P.; and Shapiro, S. C. [1981]
Bi-directional Inference. Technical Report No. 174,
Department of Computer Science, SUNY at Buffalo, Amherst,
NY, 32pp.
22. * Martins, J., and Shapiro, S. C. [1981] A Belief Revision
System Based on Relevance Logic and Heterarchical Contexts.
Technical Report No. 172, Department of Computer Science,
SUNY at Buffalo, Amherst, NY, 42pp.
23. Shapiro, S. C. [1981] Summary of Scientific Progress. SNeRG
Technical Note No. 3, Department of Computer Science, SUNY
at Buffalo, Amherst, NY, 2pp.
24. McKay, D. P. and Martins, J. SNePSLOG User's Manual. SNeRG
Technical Note No. 4, Department of Computer Science, SUNY
at Buffalo, Amherst, NY, 8pp.
25. McKay, D. P.; Shubin, H.; and Martins, J. [1981] RIPOFF:
Another Text Formatting Program. SNeRG Technical Note No. 2,
Department of Computer Science, SUNY at Buffalo, Amherst,
NY, 18pp.
26. * Neal, J. [1981] A Knowledge Engineering Approach to
Natural Language Understanding. Technical Report No. 179,
Computer Science Department, SUNY at Buffalo, Amherst, NY,
67pp.
27. * Srihari, R. [1981] Combining Path-based and Node-based
Reasoning in SNePS. Technical Report No. 183, Department of
Computer Science, SUNY at Buffalo, Amherst, NY, 22pp.
28. McKay, D. P.; Martins, J.; Morgado, E.; Almeida, M.; and
Shapiro, S. C. [1981] An Assessment of SNePS for the Navy
Domain. SNeRG Technical Note No. 6, Department of Computer
Science, SUNY at Buffalo, Amherst, NY, 48pp.
29. Shapiro, S. C. [1981] What do Semantic Network Nodes
Represent? SNeRG Technical Note No. 7, Department of
Computer Science, SUNY at Buffalo, Amherst, NY, 12pp.
Presented at the workshop on Foundational Threads in Natural
Language Processing, SUNY at Stony Brook.
30. McKay, D. P., and Shapiro, S. C. [1981] Using active
connection graphs for reasoning with recursive rules.
Proceedings of the Seventh International Joint Conference on
Artificial Intelligence, William Kaufman, Los Altos, CA,
368-374.
31. Shapiro, S. C. and The SNePS Implementation Group [1981]
SNePS User's Manual. Department of Computer Science, SUNY at
Buffalo, Amherst, NY, 44pp.
32. Shapiro, S. C.; McKay, D. P.; Martins, J.; and Morgado, E.
[1981] SNePSLOG: A "Higher Order" Logic Programming
Language. SNeRG Technical Note No. 8, Department of Computer
Science, SUNY at Buffalo, Amherst, NY, 16pp. Presented at
the Workshop on Logic Programming for Intelligent Systems,
R.M.S. Queen Mary, Long Beach, CA.
33. * Shubin, H. [1981] Inference and Control in Multiprocessing
Environments. Technical Report No. 186, Department of
Computer Science, SUNY at Buffalo, Amherst, NY, 26pp.
34. Shapiro, S. C. [1982] Generalized Augmented Transition
Network Grammars for Generation from Semantic Networks. The
American Journal of Computational Linguistics 8, 1 (January
- March), 12-22.
35. Almeida, M.J. [1982] NETP2 - A Parser for a Subset of
English. SNERG Technical Note No. 9, Department of Computer
Science, SUNY at Buffalo, Amherst, NY, 32pp.
36. * Tranchell, L.M. [1982] A SNePS Implementation of KL-ONE,
Technical Report No. 198, Department of Computer Science,
SUNY at Buffalo, Amherst, NY, 21pp.
37. Shapiro, S.C. and Neal, J.G. [1982] A Knowledge engineering
Approach to Natural language understanding. Proceedings of
the 20th Annual Meeting of the Association for Computational
Linguistics, ACL, Menlo Park, CA, 136-144.
38. Donlon, G. [1982] Using Resource Limited Inference in SNePS.
SNeRG Technical Note No. 10, Department of Computer Science,
SUNY at Buffalo, Amherst, NY, 10pp.
39. Nutter, J. T. [1982] Defaults revisited or "Tell me if
you're guessing". Proceedings of the Fourth Annual
Conference of the Cognitive Science Society, Ann Arbor, MI,
67-69.
40. Shapiro, S. C.; Martins, J.; and McKay, D. [1982]
Bi-directional inference. Proceedings of the Fourth Annual
Meeting of the Cognitive Science Society, Ann Arbor, MI,
90-93.
41. Maida, A. S. and Shapiro, S. C. [1982] Intensional concepts
in propositional semantic networks. Cognitive Science 6, 4
(October-December), 291-330.
42. Martins, J. P. [1983] Belief revision in MBR. Proceedings of
the 1983 Conference on Artificial Intelligence, Rochester,
MI.
43. Nutter, J. T. [1983] What else is wrong with non-monotonic
logics?: representational and informational shortcomings.
Proceedings of the Fifth Annual Meeting of the Cognitive
Science Society, Rochester, NY.
44. Almeida, M. J. and Shapiro, S. C. [1983] Reasoning about the
temporal structure of narrative texts. Proceedings of the
Fifth Annual Meeting of the Cognitive Science Society,
Rochester, NY.
45. * Martins, J. P. [1983] Reasoning in Multiple Belief Spaces.
Ph.D. Dissertation, Technical Report No. 203, Computer
Science Department, SUNY at Buffalo, Amherst, NY, 381 pp.
46. Martins, J. P. and Shapiro, S. C. [1983] Reasoning in
multiple belief spaces. Proceedings of the Eighth
International Joint Conference on Artificial Intelligence,
William Kaufman, Los Altos, CA, 370-373.
47. Nutter, J. T. [1983] Default reasoning using monotonic
logic: a modest proposal. Proceedings of The National
Conference on Artificial Intelligence, William Kaufman, Los
Altos, CA, 297-300.
------------------------------
End of AIList Digest
********************
∂20-Sep-83 1045 ELYSE@SU-SCORE.ARPA Faculty Meeting Next Week
Received: from SU-SCORE by SU-AI with TCP/SMTP; 20 Sep 83 10:45:27 PDT
Date: Tue 20 Sep 83 10:05:40-PDT
From: Elyse Krupnick <ELYSE@SU-SCORE.ARPA>
Subject: Faculty Meeting Next Week
To: Faculty@SU-SCORE.ARPA
cc: YM@SU-AI.ARPA, OP@SU-AI.ARPA
Stanford-Phone: (415) 497-9746
Tentative Agenda-September 27 Meeting
(To be held at 1:15-3:15 in Boystown Conference Room.)
1. Promotion of Jussi Ketonen to Senior Research Associate. Recommended
by John McCarthy. Ketonen C.V. to be distributed this week.
2. The appointment of Leo Guibas as a Consulting Associate Professor
for 83-84.
3. Consideration of the Computer Usage Policy drafted by Jeff Ullman.
Copies to be distributed this week.
*If you are interested in bringing up any new topics be sure to send me a
message to that effect and please send any agenda items and supporting
materials to me ASAP.
**We will be meeting once a month on the 1st Tuesday at 2:30 pm in room 252
of MJH. Please note on your calendar that there will be a meeting on the
following dates:
Oct. 4, Nov. 1, and Dec. 6.
-------
∂20-Sep-83 1121 LAWS@SRI-AI.ARPA AIList Digest V1 #60
Received: from SRI-AI by SU-AI with TCP/SMTP; 20 Sep 83 11:19:24 PDT
Date: Tuesday, September 20, 1983 9:41AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #60
To: AIList@SRI-AI
AIList Digest Tuesday, 20 Sep 1983 Volume 1 : Issue 60
Today's Topics:
AI Journals - AI Journal Changes,
Applications - Cloud Data & AI and Music,
Games - Go Tournament,
Intelligence - Turing test & Definitions
----------------------------------------------------------------------
Date: Mon, 19 Sep 83 18:51 PDT
From: Bobrow.PA@PARC-MAXC.ARPA
Subject: News about the Artificial Intelligence Journal
Changes in the Artificial Intelligence Journal
Daniel G. Bobrow (Editor-in-chief)
There have been a number of changes in the Artificial Intelligence
Journal which are of interest to the AI community.
1) The size of the journal is increasing. In 1982, the journal was
published in two volumes of three issues each (about 650 printed
pages per year). In 1983, we increased the size to two volumes of
four issues each (about 900 printed pages per year). In order to
accommodate the increasing number of high quality papers that are
being submitted to the journal, in 1984 the journal will be published
in three volumes of three issues each (about 1000 printed pages per
year).
2) Despite the journal size increase, North Holland will maintain the
current price of $50 per year for personal subscriptions for
individual (non-institutional) members of major AI organizations
(e.g. AAAI, SIGART). To obtain such a subscription, members of such
organizations should send a copy of their membership acknowledgement,
and their check for $50 (made out to Artificial Intelligence) to:
Elsevier Science Publishers
Attn: John Tagler
52 Vanderbilt Avenue
New York, New York 10017
North Holland (Elsevier) will acknowledge receipt of the request for
subscription, provide information about which issues will be included
in your subscription, and when they should arrive. Back issues are
not available at the personal rate.
3) The AIJ editorial board has recognized the need for good review
articles in subfields of AI. To encourage the writing of such
articles, an honorarium of $1000 will be awarded the authors of any
review accepted by the journal. Although review papers will go
through the usual review process, when accepted they will be given
priority in the publication queue. Potential authors are reminded
that review articles are among the most cited articles in any field.
4) The publication process takes time. To keep an even flow of
papers in the journal, we must maintain a queue of articles of about
six months. To allow people to know about important research results
before articles have been published, we will publish lists of papers accepted
for publication in earlier issues of the journal, and make such lists
available to other magazines (e.g. AAAI magazine, SIGART news).
5) New book review editor: Mark Stefik has taken the job of book
review editor for the Artificial Intelligence Journal. The following
note from Mark describes his plans to make the book review section
much more active than it has been in the past.
------------------
The Book Review Section of the Artificial Intelligence Journal
Mark Stefik - Book Review Editor
I am delighted for this opportunity to start an active review column
for AI, and invite your suggestions and participation.
This is an especially good time to review work in artificial
intelligence. Not only is there a surge of interest in AI, but there
are also many new results and publications in computer science, in
the cognitive sciences and in other related sciences. Many new
projects are just beginning and finding new directions (e.g., machine
learning, computational linguistics), new areas of work are opening
up (e.g., new architectures), and others are reporting on long term
projects that are maturing (computer vision). Some readers will want
to track progress in specialized areas; others will find inspiration
and direction from work breaking outside the field. There is enough
new and good but unreviewed work that I would like to include two or
three book reviews in every issue of Artificial Intelligence.
I would like this column of book reviews to become essential
reading for the scientific audience of this journal. My goal is to
cover both scientific works and textbooks. Reviews of scientific
work will not only provide an abstract of the material, but also show
how it fits into the body of existing work. Reviews of textbooks
will discuss not only clarity and scope, but also how well the
textbook serves for teaching. For controversial work of major
interest I will seek more than one reviewer.
To get things started, I am seeking two things from the
community now. First, suggestions of books for review. Books
written in the past five years or so will be considered. The scope
of the fields considered will be broad. The main criteria will be
scientific interest to the readership. For example, books from as
far afield as cultural anthropology or sociobiology will be
considered if they are sufficiently relevant, and readable by an AI
audience. Occasionally, important books intended for a popular
audience will also be considered.
My second request is for reviewers. I will be asking
colleagues for reviews of particular books, but will also be open
both to volunteers and suggestions. Although I will tend to solicit
reviews from researchers of breadth and maturity, I recognize that
graduate students preparing theses are some of the best read people
in specialized areas. For them, reviews in Artificial Intelligence
will be a good way to share the fruits of intensive reading in
thesis preparation, and also to achieve some visibility. Reviewers
will receive a personal copy of the book reviewed.
Suggestions will reach me at the following address.
Publishers should send two copies of works to be reviewed.
Mark Stefik
Knowledge Systems Area
Xerox Palo Alto Research Center
3333 Coyote Hill Road
Palo Alto, California 94304
ARPANET Address: STEFIK@PARC
------------------------------
Date: Mon, 19 Sep 83 17:09:09 PDT
From: Alex Pang <v.pang@UCLA-LOCUS>
Subject: help on satellite image processing
I'm planning to do some work on cloud formation prediction
based either purely on previous cloud formations or together with some
other information - e.g. pressure, humidity, wind, etc. Does anyone
out there know of any existing system doing any related stuff on this,
and if so, how and where I can get more information on it. Also, do
any of you know where I can get satellite data with 3D cloud
information?
Thank you very much.
alex pang
------------------------------
Date: 16 Sep 83 22:26:21 EDT (Fri)
From: Randy Trigg <randy%umcp-cs@UDel-Relay>
Subject: AI and music
Speaking of creativity and such, I've had an interest in AI and music
for some time. What I'd like is any pointers to companies and/or
universities doing work in such areas as cognitive aspects of
appreciating and creating music, automated music analysis and
synthesis, and "smart" aids for composers and students.
Assuming a reasonable response, I'll post results to the AIList.
Thanks in advance.
Randy Trigg
...!seismo!umcp-cs!randy (Usenet)
randy.umcp-cs@udel-relay (Arpanet)
------------------------------
Date: 17 Sep 83 23:51:40-PDT (Sat)
From: harpo!utah-cs!utah-gr!thomas @ Ucb-Vax
Subject: Re: Go Tournament
Article-I.D.: utah-gr.908
I'm sure we could find some time on one of our Vaxen for a Go
tournament. If you're writing it on some other machine, make sure it
is portable.
=Spencer
------------------------------
Date: Fri 16 Sep 83 20:07:31-PDT
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Turing test
It was once playfully proposed to permute the actors in the classical
definition of the Turing test, and thus define an intelligent entity
as one that can tell the difference between a human and a (deceptively
programmed) computer. May have been prompted by the well-known
incident involving Eliza. The result is that, as our AI systems get
better, the standard for intelligence will increase. This definition
may even enable some latter-day Goedel to prove mathematically that
computers can never be intelligent!
- Richard :-)
------------------------------
Date: Fri, 16 Sep 83 19:36:53 PDT
From: harry at lbl-nmm
Subject: Psychology and Artificial Intelligence.
Members of this list might find it interesting to read an article ``In
Search of Unicorns'' by M. A. Boden (author of ``Artificial
Intelligence and Natural Man'') in The Sciences (published by the New
York Academy of Sciences). It discusses the `computational style' in
theoretical psychology. It is not a technical article.
Harry Weeks
------------------------------
Date: 15 Sep 83 17:10:04-PDT (Thu)
From: ihnp4!arizona!robert @ Ucb-Vax
Subject: Another Definition of Intelligence
Article-I.D.: arizona.4675
A problem that bothers me about the Turing test is having to
provoke the machine with such specific questioning. So jumping ahead
a couple of steps, I would accept a machine as an adequate
intelligence if it could listen to a conversation between other
intelligences, and be able to interject at appropriate points such
that these others would not be able to infer the mechanical aspect of
this new source. Our experiences with human intelligence would make
us very suspicious of anyone or anything that sits quietly without new,
original, or synthetic comments while being within an environment of
discussion.
And then to fully qualify, upon overhearing these discussions
over net, I'd expect it to start conjecturing on the question of
intelligence, produce its own definition, and then start sending out
any feelers to ascertain if there is anything out there qualifying
under its definition.
------------------------------
Date: 16 Sep 83 23:11:08-PDT (Fri)
From: decvax!linus!philabs!seismo!rlgvax!cvl!umcp-cs!speaker @ Ucb-Vax
Subject: Re: Another Definition of Intelligence
Article-I.D.: umcp-cs.2608
Finally, someone has come up with a fresh point of view in an
otherwise stale discussion!
Arizona!robert suggests that a machine could be classified as
intelligent if it can discern intelligence within its environment, as
opposed to being prodded into displaying intelligence. But how can we
tell if the machine really has a discerning mind? Does it get
involved in an interesting conversation and respond with its own
ideas? Perhaps it just sits back and says nothing, considering the
conversation too trivial to participate in.
And therein lies the problem with this idea. What if the machine
doesn't feel compelled to interact with its environment? Is this a
sign of inability, or disinterest? Possibly disinterest. A machine
mind might not be interested in its environment, but in its own
thoughts. Its own thoughts ARE its environment. Perhaps it's a sign
of some mental aberration. I'm sure that sufficiently intelligent
machines will be able to develop all sorts of wonderfully neurotic
patterns of behavior.
I know. Let's build a machine with only a console for an output
device and wait for it to say, "Hey, anybody intelligent out there?"
"You got any VAXEN out there?"
- Speaker
-- Full-Name: Speaker-To-Animals
Csnet: speaker@umcp-cs
Arpa: speaker.umcp-cs@UDel-Relay
------------------------------
Date: 17 Sep 83 19:17:21-PDT (Sat)
From: hplabs!hao!seismo!rlgvax!cvl!umcp-cs!speaker @ Ucb-Vax
Subject: Life, don't talk to me about life....
Article-I.D.: umcp-cs.2628
From: jpj@mss
Subject: Re: Another Definition of Intelligence
To: citcsv!seismo!rlgvax!cvl!umcp-cs!speaker
I find your notion of an artificial intelligence sitting
back, taking in all that goes on around it, but not being
motivated to comment (perhaps due to boredom) an amusing
idea. Have you read "The Restaurant at the End of the
Universe?" In that story is a most entertaining ai - a
chronically depressed robot (whose name escapes me at the
moment - I don't have my copy at hand) who thinks so much
faster than all the mortals around it that it is always
bored and *feels* unappreciated. (Sounds like some of my
students!)
Ah yes, Marvin the paranoid android. "Here I am, brain the size of a
planet and all they want me to do is pick up a piece of paper."
This is really interesting. You might think that a robot with such a
huge intellect would also develop an oversized ego... but just the
reverse could be true. He thinks so fast and so well that he becomes
bored and disgusted with everything around himself... so he withdraws
and wishes his boredom and misery would end.
I doubt Adams had this in mind when he wrote the book, but it fits
together nicely anyway.
--
- Speaker
speaker@umcp-cs
speaker.umcp-cs@UDel-Relay
------------------------------
End of AIList Digest
********************
∂20-Sep-83 1735 GOLUB@SU-SCORE.ARPA Faculty meetings
Received: from SU-SCORE by SU-AI with TCP/SMTP; 20 Sep 83 17:35:29 PDT
Date: Tue 20 Sep 83 17:35:57-PDT
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Faculty meetings
To: faculty@SU-SCORE.ARPA
cc: YM@SU-AI.ARPA, op@SU-AI.ARPA, elyse@SU-SCORE.ARPA
The meetings on Tuesday Oct 4, Nov 1 and Dec 6 referred to by
Elyse at the end of her message are for tenured faculty members.
GENE
-------
∂20-Sep-83 1757 GOLUB@SU-SCORE.ARPA Appointment
Received: from SU-SCORE by SU-AI with TCP/SMTP; 20 Sep 83 17:57:22 PDT
Date: Tue 20 Sep 83 17:57:38-PDT
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Appointment
To: faculty@SU-SCORE.ARPA
cc: Su-bboards@SU-SCORE.ARPA
I'm very pleased to announce that the Board of Trustees has approved
the appointment of Rod Brooks. We're happy to have you on the Faculty,
Rod.
Gene Golub
-------
∂21-Sep-83 0837 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #28
Received: from SU-SCORE by SU-AI with TCP/SMTP; 21 Sep 83 08:37:06 PDT
Date: Tuesday, September 20, 1983 7:03PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #28
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Wednesday, 21 Sep 1983 Volume 1 : Issue 28
Today's Topics:
Representation - Transitive Closures,
Puzzle - Solution to Truthteller
----------------------------------------------------------------------
Date: 17 September 1983 1301-PDT (Saturday)
From: Abbott at AeroSpace ( Russ Abbott )
Subject: Transitive Closures
The recent discussion of transitive relations has drifted somewhat
from the original question. I thought this might be a good time to
raise it again.
Does anyone know of a good way to write a predicate in Prolog that
defines a relation to be the transitive closure of another relation?
For example:
transitive_closure(R, T_R).
should have the side-effect of ensuring that if R(a, b) and R(b, c)
are either in the database or ( important! ) are added to the database
later, then T_R(X, Y) will succeed with (X, Y) bound to (a, b),
(b, c), and (a, c) in turn.
One might imagine writing something like:
transitive_closure(R, T_R) :-
    assert((
        T_R(A, C) :-
            R(A, C);
            T_R(A, B), T_R(B, C)
    )).
But that leads to various difficulties involving infinite recursion.
So far, there are no clean solutions.
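[ A minimal sketch, not part of the original message: the usual workaround
is to leave the closure as a derived relation that is recomputed on each
call, carrying a list of visited nodes so that cyclic data cannot cause
infinite recursion. This is a concrete instance for one edge relation,
not the general meta-predicate asked for; the edge relation r/2 and the
node names are invented examples, and member/2 is assumed from the list
library. Because nothing is asserted for the closure, clauses of r/2
added later are seen automatically on the next call. ]

r(a, b).
r(b, c).
r(c, a).                      /* a cycle, to show termination */

t_r(X, Y) :- t_r(X, Y, [X]).

t_r(X, Y, _) :- r(X, Y).
t_r(X, Z, Visited) :-
    r(X, Y),
    \+ member(Y, Visited),
    t_r(Y, Z, [Y|Visited]).

/* ?- t_r(a, Y).   enumerates Y = b, c, a and then fails cleanly */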
------------------------------
Date: Mon 19 Sep 83 02:25:41-PDT
From: Motoi Suwa <Suwa@Sumex-AIM>
Subject: Puzzle Solution
Date: 14 Sep. 1983
From: K.Handa ETL Japan
Subject: Another Puzzle Solution
This is the solution of Alan's puzzle introduced on 24 Aug.
?-go(10).
will display the ten-digit number as follows:
-->6210001000
and
?-go(4).
will:
-->1210
-->2020
I found the following numbers:
6210001000
521001000
42101000
3211000
21200
1210
2020
The following is the complete program ( DEC-10 Prolog Ver. 3 ):
/*** initial assertion ***/
init(D):- ass_xn(D),assert(rest(D)),!.
ass_xn(0):- !.
ass_xn(D):- D1 is D-1,asserta(x(D1,_)),asserta(n(D1)),ass_xn(D1).
/*** main program ***/
go(D):- init(D),guess(D,0).
go(_):- abolish(x,2),abolish(n,1),abolish(rest,1).
/* guess 'N'th digit */
guess(D,D):- result,!,fail.
guess(D,N):- x(N,X),var(X),!,n(Y),N=<Y,N*Y=<D,ass(N,Y),set(D,N,Y),
             N1 is N+1,guess(D,N1).
guess(D,N):- x(N,X),set(D,N,X),N1 is N+1,guess(D,N1).
/* let 'N'th digit be 'X' */
ass(N,X):- only(retract(x(N,_))),asserta(x(N,X)),only(update(1)).
ass(N,_):- retract(x(N,_)),asserta(x(N,_)),update(-1),!,fail.
only(X):- X,!.
/* 'X' 'N's appear in the sequence of digits */
set(D,N,X):- count(N,Y),rest(Z),!,Y=<X,X=<Y+Z,X1 is X-Y,set1(D,N,X1,0).
set1(_,N,0,_):- !.
set1(D,N,X,P):- n(M),P=<M,x(M,Y),var(Y),M*N=<D,ass(M,N),set(D,M,N),
                X1 is X-1,P1 is M,set1(D,N,X1,P1).
/* 'X' is the number of digits whose value is 'N' */
count(N,X):- bagof(M,M^(x(M,Z),nonvar(Z),Z=N),L),length(L,X).
count(_,0).
/* update the number of digits whose value is not yet assigned */
update(Z):- only(retract(rest(X))),Z1 is X-Z,assert(rest(Z1)).
update(Z):- retract(rest(X)),Z1 is X+Z,assert(rest(Z1)),!,fail.
/* display the result */
result:- print(-->),n(N),x(N,M),print(M),fail.
result:- nl.
------------------------------
End of PROLOG Digest
********************
∂21-Sep-83 1419 rita@su-score CSMS Update
Received: from SU-SHASTA by SU-AI with PUP; 21-Sep-83 14:18 PDT
Received: from Score by Shasta with TCP; Wed Sep 21 13:50:56 1983
Date: Wed 21 Sep 83 13:50:50-PDT
From: Rita Leibovitz <RITA@SU-SCORE.ARPA>
Subject: CSMS Update
To: Admissions@SU-SHASTA.ARPA
Stanford-Phone: (415) 497-4365
This is the latest on the CSMS applicants who have accepted our offer.
30-Jun-83 10:01:02-PDT,2767;000000000005
Mail-From: RITA created at 30-Jun-83 10:01:01
Date: Thu 30 Jun 83 10:01:01-PDT
From: Rita Leibovitz <RITA@SU-SCORE.ARPA>
Subject: CSMS AOO
To: rita@SU-SCORE.ARPA
Stanford-Phone: (415) 497-4365
9/20/83 DEPARTMENT OF COMPUTER SCIENCE
CSMS APPLICANTS WHO HAVE ACCEPTED OUR OFFER OF ADMISSION
* * * *
TOTAL: 46 MALE: 37 FEMALE: 9 DEFERRED: 3
LAST FIRST SEX COTERM DEPT. MINORITY COUNTRY
---- ----- --- ------------ -------- -------
ANDERSON ALLAN M
ANDERSON STEVEN M
BENNETT DON M
BERNSTEIN DAVID M
BION JOEL M PHILOSOPHY (defer until 9/84)
BRAWN BARBARA F HISPANIC
CAMPOS ALVARO M CHILE
CHAI SUN-KI M ASIAN
CHEHIRE WADIH M FRANCE/LEBANON
CHEN GORDON M ASIAN
COCHRAN KIMBERLY F
COLE ROBERT M
COTTON TODD M MATH
DICKEY CLEMENT M
ETHERINGTON RICHARD M
GARBAGNATI FRANCESCO M ITALY
GENTILE CLAUDIO M ITALY
GOLDSTEIN MARK M
HARRIS PETER M
HECKERMAN DAVID M
HUGGINS KATHERINE F
JAI HOKIMI BASSIM M FR. MOROCCO
JONSSON BENGT M SWEDEN
JULIAO JORGE M COLOMBIA
LEO YIH-SHEH M CANADA
LEWINSON JAMES M MATH
LOEWENSTEIN MAX M
MARKS STUART M E.E. ASIAN (defer until 4/84)
MUELLER SUZANNE F
MULLER ERIC M FRANCE
PERKINS ROBERT M CHEMISTRY
PERNICI BARBARA F ITALY
PONCELEON DULCE F VENEZUELA
PORAT RONALD M
PROUDIAN DEREK M ENGLISH/COG.SCI
REUS EDWARD M
SCOGGINS JOHN M MATH. SCIENCE
SCOTT KIMBERLY F
VELASCO ROBERTO M PHILIPPINES
VERDONK BRIGITTE F BELGIUM
WENOCUR MICHAEL M
WICKSTROM PAUL M
WU LI-MEI F TAIWAN, R.O.C.
WU NORBERT M ELEC. ENGIN. ASIAN (defer until 9/84)
YOUNG KARL M
YOUNG PAUL M
-------
rita
-------
∂21-Sep-83 1619 GOLUB@SU-SCORE.ARPA Reception
Received: from SU-SCORE by SU-AI with TCP/SMTP; 21 Sep 83 16:18:57 PDT
Date: Wed 21 Sep 83 16:10:44-PDT
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Reception
To: faculty@SU-SCORE.ARPA
There will be a reception for the new Ph. D. students
and their student advisers at my house on Friday at 4:30.
It would be nice if some of the faculty could come too.
Please let me know if you can make it.
GENE
-------
∂22-Sep-83 1018 GOLUB@SU-SCORE.ARPA Registration
Received: from SU-SCORE by SU-AI with TCP/SMTP; 22 Sep 83 10:18:25 PDT
Date: Thu 22 Sep 83 10:19:30-PDT
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Registration
To: faculty@SU-SCORE.ARPA
Monday and Tuesday are Registration and it is important that you
be available to students sometime during that period.
We also have a faculty meeting on Tuesday at 1:15. Please send
me any supporting materials you might have for agenda items.
GENE
-------
∂22-Sep-83 1847 LAWS@SRI-AI.ARPA AIList Digest V1 #61
Received: from SRI-AI by SU-AI with TCP/SMTP; 22 Sep 83 18:47:28 PDT
Date: Thursday, September 22, 1983 5:15PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #61
To: AIList@SRI-AI
AIList Digest Friday, 23 Sep 1983 Volume 1 : Issue 61
Today's Topics:
AI Applications - Music,
AI at Edinburgh - Request,
Games - Prolog Puzzle Solution,
Seminars - Talkware & Hofstadter,
Architectures - Parallelism,
Technical Reports - Rutgers
----------------------------------------------------------------------
Date: 20 Sep 1983 2120-PDT
From: FC01@USC-ECL
Subject: Re: Music in AI
Music in AI - find Art Wink, formerly of the U. of Pgh. Dept. of Info. Sci.
He had a really nice program to imitate Debussy (experts could not tell
its compositions from the originals).
------------------------------
Date: 18 Sep 83 12:01:27-PDT (Sun)
From: decvax!dartvax!lorien @ Ucb-Vax
Subject: U of Edinburgh, Scotland Inquiry
Article-I.D.: dartvax.224
Who knows anything about the current status of the Artificial
Intelligence school at the University of Edinburgh? I've heard
they've been through hard times in recent years, what with the
Lighthill report and British funding shakeups, but what has been going
on within the past year or so? I'd appreciate any gossip/rumors/facts,
and, if anyone knows that they're on the net, their address.
--decvax!dartvax!dartlib!lorien
Lorien Y. Pratt
------------------------------
Date: Mon 19 Sep 83 02:25:41-PDT
From: Motoi Suwa <Suwa@Sumex-AIM>
Subject: Puzzle Solution
[Reprinted from the Prolog Digest.]
Date: 14 Sep. 1983
From: K.Handa ETL Japan
Subject: Another Puzzle Solution
This is the solution of Alan's puzzle introduced on 24 Aug.
?-go(10).
will display the ten-digit number as follows:
-->6210001000
and
?-go(4).
will:
-->1210
-->2020
I found the following numbers:
6210001000
521001000
42101000
3211000
21200
1210
2020
The following is the complete program ( DEC-10 Prolog Ver. 3 ):
/*** initial assertion ***/
init(D):- ass_xn(D),assert(rest(D)),!.
ass_xn(0):- !.
ass_xn(D):- D1 is D-1,asserta(x(D1,_)),asserta(n(D1)),ass_xn(D1).
/*** main program ***/
go(D):- init(D),guess(D,0).
go(_):- abolish(x,2),abolish(n,1),abolish(rest,1).
/* guess 'N'th digit */
guess(D,D):- result,!,fail.
guess(D,N):- x(N,X),var(X),!,n(Y),N=<Y,N*Y=<D,ass(N,Y),set(D,N,Y),
             N1 is N+1,guess(D,N1).
guess(D,N):- x(N,X),set(D,N,X),N1 is N+1,guess(D,N1).
/* let 'N'th digit be 'X' */
ass(N,X):- only(retract(x(N,_))),asserta(x(N,X)),only(update(1)).
ass(N,_):- retract(x(N,_)),asserta(x(N,_)),update(-1),!,fail.
only(X):- X,!.
/* 'X' 'N's appear in the sequence of digits */
set(D,N,X):- count(N,Y),rest(Z),!,Y=<X,X=<Y+Z,X1 is X-Y,set1(D,N,X1,0).
set1(_,N,0,_):- !.
set1(D,N,X,P):- n(M),P=<M,x(M,Y),var(Y),M*N=<D,ass(M,N),set(D,M,N),
                X1 is X-1,P1 is M,set1(D,N,X1,P1).
/* 'X' is the number of digits whose value is 'N' */
count(N,X):- bagof(M,M^(x(M,Z),nonvar(Z),Z=N),L),length(L,X).
count(_,0).
/* update the number of digits whose value is not yet assigned */
update(Z):- only(retract(rest(X))),Z1 is X-Z,assert(rest(Z1)).
update(Z):- retract(rest(X)),Z1 is X+Z,assert(rest(Z1)),!,fail.
/* display the result */
result:- print(-->),n(N),x(N,M),print(M),fail.
result:- nl.
------------------------------
Date: 21 Sep 83 1539 PDT
From: David Wilkins <DEW@SU-AI>
Subject: Talkware Seminars
[Reprinted from the SU-SCORE bboard.]
1127 TW Talkware seminar Weds. 2:15
I will be organizing a weekly seminar this fall on a new area I am
currently developing as a research topic: the theory of "talkware".
This area deals with the design and analysis of languages that are
used in computing, but are not programming languages. These include
specification languages, representation languages, command languages,
protocols, hardware description languages, data base query languages,
etc. There is currently a lot of ad hoc but sophisticated practice
for which a more coherent and general framework needs to be developed.
The situation is analogous to the development of principles of
programming languages from the diversity of "coding" languages and
methods that existed in the early fifties.
The seminar will include outside speakers and student presentations of
relevant literature, emphasizing how the technical issues dealt with
in current projects fit into the development of talkware theory. It will
meet at 2:15 every Wednesday in Jacks 301. The first meeting will be
Wed. Sept. 28. For a more extensive description, see
{SCORE}<WINOGRAD>TALKWARE or {SAIL}TALKWA[1,TW].
------------------------------
Date: Thu 22 Sep 00:23
From: Jeff Shrager
Subject: Hofstadter seminar at MIT
[Reprinted from the CMU-AI bboard.]
Douglas Hofstadter is giving a course this semester at MIT. I thought
that the abstract would interest some of you. The first session takes
place today.
------
"Perception, Semanticity, and Statistically Emergent Mentality"
A seminar to be given fall semester by Douglas Hofstadter
In this seminar, I will present my viewpoint about the nature
of mind and the goals of AI. I will try to explain (and thereby
develop) my vision of how we perceive the essence of things, filtering
out the details and getting at their conceptual core. I call this
"deep perception", or "recognition".
We will review some earlier projects that attacked some
related problems, but primarily we will be focussing on my own
research projects, specifically: Seek-Whence (perception of sequential
patterns), Letter Spirit (perception of the style of letters), Jumbo
(reshuffling of parts to make "well-chunked" wholes), and Deep Sea
(analogical perception). These tightly related projects share a
central philosophy: that cognition (mentality) cannot be programmed
explicitly but must emerge "epiphenomenally", i.e., as a consequence
of the nondeterministic interaction of many independent "subcognitive"
pieces. Thus the overall "mentality" of such a system is not directly
programmed; rather, it EMERGES as an observable (but unprogrammed)
phenomenon -- a statistical consequence of many tiny semi-cooperating
(and of course programmed) pieces. My projects all involve certain
notions under development, such as:
-- "activation level": a measure of the estimated relevance of a given
Platonic concept at a given time;
-- "happiness": a measure of how easy it is to accomodate a structure
and its currently accepted Platonic class to each other;
-- "nondeterministic terraced scan": a method of homing in to the best
category to which to assign something;
-- "semanticity": the measure of how abstractly rooted (intensional) a
perception is;
-- "slippability": the ease of mutability of intensional
representational structures into "semantically close" structures;
-- "system temprature": a number measuring how chaotically active the
whole system is.
This strategy for AI is permeated by probabilistic or
statistical ideas. The main idea is that things need not happen in
any fixed order; in fact, that chaos is often the best path to follow
in building up order. One puts faith in the reliability of
statistics: a sensible, coherent total behavior will emerge when there
are enough small independent events being influenced by high-level
parameters such as temperature, activation levels, and happinesses. A
challenge is to develop ways such a system can watch its own
activities and use those observations to evaluate its own progress, to
detect and pull itself out of ruts it chances to fall into, and to
guide itself toward a satisfying outcome.
... Prerequisites: an ability to program well, preferably in
Lisp, and an interest in philosophy of mind and artificial
intelligence.
------------------------------
Date: 18 Sep 83 22:48:56-PDT (Sun)
From: decvax!dartvax!lorien @ Ucb-Vax
Subject: Parallelism et. al.
Article-I.D.: dartvax.229
The Parallelism and AI projects at the University of Maryland sound
very interesting. I agree with an article posted a few days back that
parallel hardware won't necessarily produce any significantly new
methods of computing, as we've been running parallel virtual machines
all along. Parallel hardware is another milestone along the road to
"thinking in parallel", however, getting away from the purely Von
Neumann thinking that's done in the DP world these days. It's always
seemed silly to me that our computers are so serial when our brains,
the primary analogy we have for "thinking machines," are so obviously
parallel mechanisms. Finally we have the technology (software AND
hardware) to follow in our machine architecture cognitive concepts
that evolution has already found most powerful.
I feel that the sector of the Artificial Intelligence community that
pays close attention to psychology and the workings of the human brain
deserves more attention these days, as we move from writing AI
programs that "work" (and don't get me wrong, they work very well!) to
those that have a generalizable theoretical basis. One of these years,
and better sooner than later, we'll make a quantum leap in AI research
and articulate some of the fundamental structures and methods that are
used for thinking. These may or may not be isomorphic to human
thinking, but in either case we'll do well to look to the human brain
for inspiration.
I'd like to hear more about the work at the University of Maryland; in
particular the prolog and the parallel-vision projects.
What do you think of the debate between what I'll call the Hofstadter
viewpoint: that we should think long term about the future of
artificial intelligence, and the Feigenbaum credo: that we should stop
philosophizing and build something that works? (Apologies to you both
if I've misquoted)
--Lorien Y. Pratt
decvax!dartvax!lorien
(Dartmouth College)
------------------------------
Date: 18 Sep 83 23:30:54-PDT (Sun)
From: pur-ee!uiucdcs!uiuccsb!cytron @ Ucb-Vax
Subject: AI and architectures - (nf)
Article-I.D.: uiucdcs.2883
Forwarded at the request of speaker: /***** uiuccsb:net.arch /
umcp-cs!speaker / 12:20 am Sep 17, 1983 */
The fact remains that if we don't have the algorithms for
doing something with current hardware, we still won't be
able to do it with faster or more powerful hardware.
The fact remains that if we don't have any algorithms to start with
then we shouldn't even be talking implementation. This sounds like a
software engineer's solution anyway, "design the software and then
find a CPU to run it on."
New architectures, while not providing a direct solution to a lot of
AI problems, provide the test-bed necessary for advanced AI research.
That's why everyone wants to build these "amazingly massive" parallel
architectures. Without them, AI research could grind to a standstill.
To some extent these efforts change our way of thinking
about problems, but for the most part they only speed up
what we knew how to do already.
Parallel computation is more than just "speeding things up." Some
problems are better solved concurrently.
My own belief is that the "missing link" to AI is a lot of
deep thought and hard work, followed by VLSI implementation
of algorithms that have (probably) been tested using
conventional software running on conventional architectures.
Gad...that's really provincial: "deep thought, hard work, followed by
VLSI implementation." Are you willing to wait a millenia or two while
your VAX grinds through the development and testing of a truly
high-velocity AI system?
If we can master knowledge representation and learning, we
can begin to get away from programming by full analysis of
every part of every algorithm needed for every task in a
domain. That would speed up our progress more than new
architectures.
I agree. I also agree with you that hardware is not in itself a
solution and that we need more thought put to the problems of building
intelligent systems. What I am trying to point out, however, is that
we need integrated hardware/software solutions. Highly parallel
computer systems will become a necessity, not only for research but
for implementation.
- Speaker
-- Full-Name: Speaker-To-Animals
Csnet: speaker@umcp-cs
Arpa: speaker.umcp-cs@UDel-Relay
This must be hell...all I can see are flames... towering flames!
------------------------------
Date: 19 Sep 83 9:36:35-PDT (Mon)
From: decvax!duke!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: RE: AI and Architecture
Article-I.D.: ncsu.2338
Sheesh. Everyone seems so excited about whether a parallel machine
is or will lead to fundamentally new things. I agree with someone's
comment that time-sharing and multi-programming have
been conceptually quite parallel "virtual" machines for some time.
Just more and cheaper of the same. Perhaps the added availability
will lead someone to have a good idea or two about how to do
something better -- in that sense it seems certain that something
good will come of proliferation and popularization of parallelism.
But for my money, there is nothing really, fundamentally different.
Unless it is non-determinism. Parallel systems tend to be less
deterministic than their simplex brethren, though vast efforts are
usually expended in an effort to stamp out this property. Take me
for example: I am VERY non-deterministic (just ask my wife) and yet I
am also smarter than a lot of AI programs. The breakthrough in AI/Arch
will, in my non-determined opinion, come when people stop trying to
squeeze parallel systems into the more restricted modes of simplex
systems, and develop new paradigms for how to let such a system spread
its wings in a dimension OTHER THAN performance. From a pragmatic
view, I think this will not happen until people take error recovery
and exception processing more seriously, since there is a fine line
between an error and a new thought ....
----GaryFostel----
------------------------------
Date: 20 Sep 83 18:12:15 PDT (Tuesday)
From: Bruce Hamilton <Hamilton.ES@PARC-MAXC.ARPA>
Reply-to: Hamilton.ES@PARC-MAXC.ARPA
Subject: Rutgers technical reports
This is probably of general interest. --Bruce
From: PETTY@RUTGERS.ARPA
Subject: 1983 abstract mailing
Below is a list of our newest technical reports.
The abstracts for these are available for access via FTP with user
account <anonymous> with any password. The file name is:
<library>tecrpts-online.doc
If you wish to order copies of any of these reports please send mail
via the ARPANET to LOUNGO@RUTGERS or PETTY@RUTGERS. Thank you!!
CBM-TR-128 EVOLUTION OF A PLAN GENERATION SYSTEM, N.S. Sridharan,
J.L. Bresina and C.F. Schmidt.
CBM-TR-133 KNOWLEDGE STRUCTURES FOR A MODULAR PLANNING SYSTEM,
N.S. Sridharan and J.L. Bresina.
CBM-TR-134 A MECHANISM FOR THE MANAGEMENT OF PARTIAL AND
INDEFINITE DESCRIPTIONS, N.S. Sridharan and J.L. Bresina.
DCS-TR-126 HEURISTICS FOR FINDING A MAXIMUM NUMBER OF DISJOINT
BOUNDED PATHS, D. Ronen and Y. Perl.
DCS-TR-127 THE BALANCED SORTING NETWORK, M. Dowd, Y. Perl, L.
Rudolph and M. Saks.
DCS-TR-128 SOLVING THE GENERAL CONSISTENT LABELING (OR CONSTRAINT
SATISFACTION) PROBLEM: TWO ALGORITHMS AND THEIR EXPECTED COMPLEXITIES,
B. Nudel.
DCS-TR-129 FOURIER METHODS IN COMPUTATIONAL FLUID AND FIELD
DYNAMICS, R. Vichnevetsky.
DCS-TR-130 DESIGN AND ANALYSIS OF PROTECTION SCHEMES BASED ON THE
SEND-RECEIVE TRANSPORT MECHANISM, (Thesis) R.S. Sandhu. (If you wish
to order this thesis, a pre-payment of $15.00 is required.)
DCS-TR-131 INCREMENTAL DATA FLOW ANALYSIS ALGORITHMS, M.C. Paull
and B.G. Ryder.
DCS-TR-132 HIGH ORDER NUMERICAL SOMMERFELD BOUNDARY CONDITIONS:
THEORY AND EXPERIMENTS, R. Vichnevetsky and E.C. Pariser.
LCSR-TR-43 NUMERICAL METHODS FOR BASIC SOLUTIONS OF GENERALIZED
FLOW NETWORKS, M. Grigoriadis and T. Hsu.
LCSR-TR-44 LEARNING BY RE-EXPRESSING CONCEPTS FOR EFFICIENT
RECOGNITION, R. Keller.
LCSR-TR-45 LEARNING AND PROBLEM SOLVING, T.M. Mitchell.
LRP-TR-15 CONCEPT LEARNING BY BUILDING AND APPLYING
TRANSFORMATIONS BETWEEN OBJECT DESCRIPTIONS, D. Nagel.
------------------------------
End of AIList Digest
********************
∂22-Sep-83 2332 GOLUB@SU-SCORE.ARPA IBM relations
Received: from SU-SCORE by SU-AI with TCP/SMTP; 22 Sep 83 23:32:11 PDT
Date: Thu 22 Sep 83 23:33:11-PDT
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: IBM relations
To: faculty@SU-SCORE.ARPA
I spoke to a representative of IBM yesterday. They are interested in
having better relations with our department. Does anyone have any
imaginative ideas for building better connections between this department
and IBM?
GENE
-------
∂23-Sep-83 0827 @SU-SCORE.ARPA:RINDFLEISCH@SUMEX-AIM.ARPA Re: IBM relations
Received: from SU-SCORE by SU-AI with TCP/SMTP; 23 Sep 83 08:26:59 PDT
Received: from SUMEX-AIM.ARPA by SU-SCORE.ARPA with TCP; Fri 23 Sep 83 08:26:08-PDT
Date: Fri 23 Sep 83 08:22:26-PDT
From: T. C. Rindfleisch <Rindfleisch@SUMEX-AIM.ARPA>
Subject: Re: IBM relations
To: GOLUB@SU-SCORE.ARPA, faculty@SU-SCORE.ARPA
cc: Rindfleisch@SUMEX-AIM.ARPA
In-Reply-To: Your message of Thu 22 Sep 83 23:35:34-PDT
HPP has one research contract with PASC for an expert machine fault
diagnosis system, Gene, and is working on getting funding for another
for intelligent tutoring systems. Some aspects of this work could
involve systems for IBM PC's -- this is one area where IBM seems
especially interested in research and outside software development.
What type/scope of thing did your contact have in mind for "better
relations with the department"?
Tom R.
-------
∂23-Sep-83 0933 cheriton%SU-HNV.ARPA@SU-SCORE.ARPA Re: IBM relations
Received: from SU-SCORE by SU-AI with TCP/SMTP; 23 Sep 83 09:33:26 PDT
Received: from Diablo by Score with Pup; Fri 23 Sep 83 09:34:25-PDT
Date: Fri, 23 Sep 83 09:33 PDT
From: David Cheriton <cheriton@Diablo>
Subject: Re: IBM relations
To: GOLUB@SU-Score, faculty@SU-Score
IBM has money, people and equipment.
From my standpoint, most of the equipment stinks, or is at best uninteresting.
They have made a business of cleaving the computing universe into two pieces:
IBM and everyone else. We are on the "everyone else" side. Anyone here
understand EBCDIC, DASD, DL/1, VTAM, etc.?
WRT money, it is my impression that SPO and IBM have conspired to make
getting IBM research contracts through Stanford so difficult
that it is almost hopeless unless one has the resources of a group like HPP,
especially a Tom Rindfleisch. In that vein, I find the people in IBM PASC
(with a few exceptions) disappointing. If IBM wanted to support that sort
of thing and could supply someone to aid faculty in dealing with Stanford,
in particular SPO, to get contracts through, that
would improve relations.
Finally, there are some good people hidden within IBM. How about proposing
that IBM send some of their good researchers to Stanford as fully supported
research scientists with adequate resources from IBM to do and contribute to
some research project at Stanford? I am thinking of periods of 1 or 2 years
for people we can mutually agree on. I guess this would be similar to the
CIS visitor program.
Finally again, IBM could give the Betty Scott Provisional Army new typewriters.
As I understand it, the current ones are broken-down leased ones because H&S
doesn't believe in office automation.
∂23-Sep-83 1133 @SU-SCORE.ARPA:ROD@SU-AI Re: IBM relations
Received: from SU-SCORE by SU-AI with TCP/SMTP; 23 Sep 83 11:32:48 PDT
Received: from SU-AI.ARPA by SU-SCORE.ARPA with TCP; Fri 23 Sep 83 11:33:42-PDT
Date: 23 Sep 83 1131 PDT
From: Rod Brooks <ROD@SU-AI>
Subject: Re: IBM relations
To: golub@SU-SCORE, faculty@SU-SCORE
Besides money, IBM also has packaging technology and high density RAMs,
and some reasonably nice robots. The robotics group has two donated IBM
robots (over in Durand basement for reasons of space) and about one
student's worth of IBM money. At MIT AI lab we were able to get internal
research systems (i.e. things not yet announced or released as products)
-- it took the active support of TJ Watson research people who had
developed the stuff, plus some bullying of a VP at a CS-forum like event,
to bypass the internal IBM bureaucracy. IBM also built chips for the
connection machine project there, so any super-computer projects here
might look for some support.
Arranging for IBM researchers to take sabbatical here will take a lot of
Stanford lawyer time, and the delays can be expected to be considerable.
∂23-Sep-83 1216 GOLUB@SU-SCORE.ARPA Alumni letter
Received: from SU-SCORE by SU-AI with TCP/SMTP; 23 Sep 83 12:16:11 PDT
Date: Fri 23 Sep 83 12:15:13-PDT
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Alumni letter
To: faculty@SU-SCORE.ARPA
I received a copy of a rather thoughtful letter from an alumnus, Claude Baudoin,
to the chairman of the Stanford Annual Fund. He complained that he hadn't
heard from the department for several years. I believe Bob Floyd wrote
a rather amusing letter when he was chairman.
I don't have the inclination or talent to write such a letter. Perhaps we
should all write a few paragraphs. We could of course send our last annual
report to the alumni. Would anyone like to volunteer to organise such an effort?
Perhaps some of the students could help out. Volunteers? Suggestions?
GENE
-------
∂23-Sep-83 1236 SCHMIDT@SUMEX-AIM mouse will be in the shop this afternoon
Received: from SUMEX-AIM by SU-AI with PUP; 23-Sep-83 12:36 PDT
Date: Fri 23 Sep 83 12:38:20-PDT
From: Christopher Schmidt <SCHMIDT@SUMEX-AIM>
Subject: mouse will be in the shop this afternoon
To: HPP-Lisp-Machines@SUMEX-AIM
The replacement microswitch for the LM-2 mouse arrived today. Some
time this afternoon, I will dispatch an owl to bring the mouse in for repairs.
With luck, it'll be back in operation by the end of the day.
--Christopher
-------
∂23-Sep-83 2010 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #29
Received: from SU-SCORE by SU-AI with TCP/SMTP; 23 Sep 83 20:09:58 PDT
Date: Friday, September 23, 1983 5:56PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #29
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Saturday, 24 Sep 1983 Volume 1 : Issue 29
Today's Topics:
Representation - Assert & Retract
----------------------------------------------------------------------
Date: Friday, 23-Sep-83 19:57:34-BST
From: Richard.HPS ( on ERCC DEC-10 ) <OKeefe.R.A.@EDXA>
Subject: Some Thoughts On Assert & Retract
You may already have seen my article in SigPlan where I say how
horrible it is to use assert & retract and you really shouldn't do it.
[ Unless the program is explicitly supposed to maintain a data base,
in which case you have no real alternative. That's my excuse anyway,
when people point the finger at records / recorded / erase in my
code. ] I thought it might be generally interesting if I pointed out
another pernicious effect of asserts and retracts on clauses.
It is a general principle of language design that if you don't
use a feature you shouldn't have to pay for it. Arithmetic
expressions are like that; C Prolog version 1.2D.EDAI has 'succ'
/ 2, and if you want to use a predicate that has some pretence of
reversibility, you can use succ / 2 instead of is / 2, and all you
pay is the code space for the expression evaluator. So is the cut:
the information that '!' needs to have kept for it to work has to
be kept anyway, and if you haven't got any cuts you aren't paying
anything for them.
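[ An illustrative aside, not in the original message: the reversibility
point can be seen from a few queries, assuming a succ/2 that behaves as
described above. ]
/* succ/2 runs in either direction; is/2 needs its right side ground. */
?- succ(2, Y).          /* Y = 3 */
?- succ(X, 3).          /* X = 2 */
?- Y is 2 + 1.          /* Y = 3 */
?- X is Y - 1.          /* instantiation error: Y is unbound */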
Assert and retract are not like that. There is a substantial
overhead ( my guesstimate in C Prolog would be about 2-5% of the main
loop costs ) in protecting the interpreter ( or the compiled code's
state ) against the horrible things that assert and retract can do to
predicates that happen to be running.
1. The local stack is reclaimed on determinate exit. If you have
TRO, you can reclaim a little bit earlier than that. There is
a lookahead technique which can spot determinacy much earlier:
you label each clause with the principal functor of its first
argument ( this might be an integer, or NIL for a variable ),
and after you have found a clause that matches, you scan along
the clause list for the next clause with the right first functor,
and record that as the alternative instead of the next clause.
This gives you the space saving benefits that clause indexing
provides in the Dec-10 compiler, but not the time saving.
However: if you are allowed to retract clauses in running
predicates, you cannot use this technique. The clause you
noted as the alternative might be retracted before you get there!
To get around this, whenever we retracted a clause we would have
to scan the entire local stack seeing whether this clause was in
there as an alternative, and doing something clever if it was.
Asserting also does nasty things. The clause you assert might
now be a valid alternative for a clause that has already
proceeded on the assumption that it had no alternatives
( E.g. if it had had a cut the cut would have done the wrong
thing ). Micro-PROLOG has an even worse form of this problem,
as it allows you to insert clauses in the middle of predicates,
so that a clause which had an alternative might find the new
clause popping up between itself and what it thought was its
alternative.
Thus the penalty you pay for allowing asserts and retracts on
predicates that might be running is that determinacy is not
spotted as soon as it might be, so that more local stack space
is used than strictly necessary, so that backtracking has a bit
more work to do detecting the hard way determinacy that should
have been spotted earlier, and TRO is considerably less
effective. { C Prolog currently lacks TRO. It could be added,
but I would prefer to wait until this problem is solved. }
2. What gets really nasty is when you retract a clause you happen
to be running. In a structure sharing system, this means Every
retract, because when you do the pattern match that identifies
the clause to be retracted you are almost certain to bind
variables to molecules referencing skeletons in the retracted
clause. You simply Cannot afford to have a skeleton disappear
from under you. The solution, in both DEC-10 and C Prolog, is
that when you use a clause for the first time, a pointer to it
is put on the trail, and the clause is flagged as in use. When
you backtrack past this point, you can be sure that the clause
is no longer in use, and only then can you reclaim the space.
This means that in
increment(Counter) :-
retract(counter(Counter,M)),
succ(M, N),
assert(counter(Counter,N)).
the retracted clause is Still there in the data base, tying up
space, until the "increment" call Fails. This finally happens
at the top level, because the top level tends to look something
like
repeat, read(X), answer(X), fail.
answer(end_of_file) :- halt.
answer(X) :- call(X). % with stuff for printing answers
All the space you retracted is reclaimed at This fail, if no
sooner. This has puzzled a lot of people whose programs have
run out of space when the total amount of space they were
interested in hanging onto was really quite small. ( The
DEC-10 garbage collector collects garbage in the stacks, not in
the heap. )
So the possibility of a retracted clause still being in use
means that more trail space is tied up ( most procedure calls
end up putting an entry for the current clause on it, though of
course in a highly recursive program each clause will appear on
the trail at most once ), and that failing is more expensive
because the code that resets the trail cannot trust every entry
in the trail to be a pointer to a variable. In C, instead of
for (t = oldtr; t != tr; *(*t++) = NULL) ;
you have to have
for (t = oldtr; t != tr; )
if (is_a_variable(*t)) *(*t++) = NULL;
else maybe_reclaim_clause(*t++);
[ This is Not taken from C Prolog ]
A structure copying system can avoid some of this hassle.
When you pattern match against the clause you are retracting,
you no longer end up with pointers into that clause. ( Even
this isn't true in a good structure copying system which
shares ground subterms. ) However, you still have to protect
against retracting a clause which is running. E.g.,
p :-
clause(p,Body,Ref),
erase(Ref),
fail.
p :-
normal←p.
So you still have to mark and trail those clauses. Not only can
you not reclaim the space of such clauses, you can't remove them
from the clause chain. retract, for example, has to be able to
backtrack through the entire clause chain, and the clause chain
it sees has to remain intact even in the presence of other
procedures retracting from the same predicate. E.g.
p :-
clause(p,Body,Ref),
erase(Ref),
q.
p :-
write('Gotcha!'), nl.
p :-
write('Hey, it worked!'), nl.
q :-
clause(p,Body,Ref),
erase(Ref),
fail.
The question ?- p is supposed to throw away the first two
clauses and print "Hey, it worked!".
So a clause which is erased but in use has to remain in the
clause chain. This means that the interpreter has to check
Every clause it finds to see if it has been erased. ( In
compiled code you could maybe smash one of the instructions
to be FAIL. But you would still be running some of the code
of erased clauses. )
The moral is that even if you don't use assert and retract at all,
you are paying quite a high price for the possibility.
Version 1.2D.EDAI of C Prolog attains 1000 LIPS on a VAX 750 ( yes,
it is in C and C only ); my estimate is that another 5% speed increase
could be guaranteed if it were not for assert and retract. ( Mind
you, I think that with 300 lines of assembler code I could get
another 20%, but that is another story. ) There would also be a
larger saving of stack space ( just how large depends on how
determinate your programs are ) and implementing TRO would be easier.
The question is, what can we do about it? There is some reason
for hope:
1. Well-written programs don't change running predicates. They may
change tables which a structure-sharing system thinks of as
"running", but having a special mechanism for changeable tables
seems acceptable to me. ( A small sketch of the argument-passing
alternative follows this list. )
2. There are other reasons for prohibiting changes to predicates.
Most compilers ( DEC-10, POPLOG, Micro-Prolog ) are unable to
change compiled predicates except by replacing them completely.
The Prolog-X ( ZIP ) system can handle it, but at the price of
only compiling clauses, and not compiling predicates as such.
3. Throwing a predicate completely away does Not entail all these
overheads. We do have to detect which clauses are running, but
if we forcibly reset their alternative pointer to NIL that makes
as much sense as anything.
4. Using the "recorded" data base in C Prolog entails some of these
overheads but not all. This is really 1. again.
5. There are some good indexing methods for handling tables based on
dynamic hashing. See papers by John W. Lloyd et al from the
University of Melbourne. He has also done some work on indexing
unit clauses with variables. However, there isn't much point in
applying these methods to " program-like " predicates.
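[ A minimal sketch, not in the original message, illustrating point 1:
state kept as an explicit argument instead of in the data base, in the
spirit of the earlier "increment" example. The predicate names are
invented for the illustration. Where state must outlive a single call,
the "changeable table" mechanism mentioned above is still needed. ]

/* count_terms(List, Count): Count is the length of List, threaded as
   an accumulator rather than asserted and retracted. */
count_terms(List, Count) :- count_terms(List, 0, Count).

count_terms([], Count, Count).
count_terms([_|Rest], SoFar, Count) :-
    succ(SoFar, Next),
    count_terms(Rest, Next, Count).

/* ?- count_terms([a,b,c], N).   gives N = 3 */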
I am very much a fan of DEC-10 Prolog, but its authors would agree
that unsurpassed though it is, it is not the last word in logic
programming. In particular, the assert / retract / records /
recorded / instance / erase stuff was never really designed, just
sort of grown, and there is no Prolog specification in existence
which says what it should do.
The data base stuff was debugged by eliminating surprises as they
showed up; this of course depends on what surprises whom, as the
recent controversy about negation has shown. ( It seems to boil
down to a question of what the scope rules are, rather than anything
substantial about negation. ) We can and Must find a cleaner way of
specifying changes to the data base. I am so saturated in the way
things currently work that I can't think of anything much better.
If anybody out there has any good ideas, please broadcast them,
even if they're only half-baked. There are N new Prolog
implementations, and too many of them have made the data base stuff
More complicated and Less logical. What we need is something
Simpler than what DEC-10 and C Prolog provide. ( E.g. no
data-base-references please. )
------------------------------
End of PROLOG Digest
********************
∂24-Sep-83 1354 lantz%SU-HNV.ARPA@SU-SCORE.ARPA Re: IBM relations
Received: from SU-SCORE by SU-AI with TCP/SMTP; 24 Sep 83 13:54:06 PDT
Received: from Diablo by Score with Pup; Sat 24 Sep 83 13:55:19-PDT
Date: Sat, 24 Sep 83 13:55:05 PDT
From: Keith Lantz <lantz@Diablo>
Subject: Re: IBM relations
To: Gene Golub <GOLUB@SU-SCORE.ARPA>
Cc: faculty@SU-SCORE.ARPA
In-Reply-To: Your message of Thu 22 Sep 83 23:33:11-PDT.
Yesterday, I spent some time with Horace Flatt, director of PASC,
discussing possible avenues of cooperation. Specifically, it appears
that IBM is interested in giving Stanford large amounts of equipment
for "teaching/educational" purposes, as opposed to research money.
Even more specifically, one machine of interest is an 801-based, 2 MIPS
workstation with >= 1 Mbyte of memory, ethernet interface, C compiler, etc. --
i.e., at least as good as a SUN. It is possible that we could acquire
20 or more of these machines, with supporting network servers, to act
as another teaching lab, in the vein of the one being acquired via DEC.
And, since it is unlikely that research money would be forthcoming (due
to the continuing inability of SPO and IBM lawyers to work out their
differences), IBM wouldn't require much in return -- visibility and,
say, a royalty-free internal-use license for software we bring up
(e.g. the V-System).
This wouldn't happen until sometime next year, so if anyone else is
interested in the possibilities, please come talk with me. As to pure
research contracts, I would rather wait until SPO, in particular, gets its act
together with respect to IBM.
Keith
∂25-Sep-83 1147 WIEDERHOLD%SUMEX-AIM.ARPA@SU-SCORE.ARPA Re: IBM relations
Received: from SU-SCORE by SU-AI with TCP/SMTP; 25 Sep 83 11:47:44 PDT
Received: from SUMEX-AIM by Score with Pup; Sun 25 Sep 83 11:48:22-PDT
Date: Sun 25 Sep 83 11:49:41-PDT
From: Gio Wiederhold <WIEDERHOLD@SUMEX-AIM>
Subject: Re: IBM relations
To: cheriton@Diablo
cc: GOLUB@Score, faculty@Score
In-Reply-To: Your message of Fri 23 Sep 83 09:36:49-PDT
Dave is very right in regard to SPO relations. I had to abandon
IBM research contracts and convert and finish them as consulting
because of legal interference. The loss to the university was in
overhead, and to me, because I just charged my University salary * 1.21 * 1.68
to them, as the original agreement read.
If Gene's contact is high up, that should be the prerequisite for
improved contacts.
Gio
-------
∂25-Sep-83 1736 LAWS@SRI-AI.ARPA AIList Digest V1 #62
Received: from SRI-AI by SU-AI with TCP/SMTP; 25 Sep 83 17:35:28 PDT
Date: Sunday, September 25, 1983 4:27PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #62
To: AIList@SRI-AI
AIList Digest Sunday, 25 Sep 1983 Volume 1 : Issue 62
Today's Topics:
Language Understanding & Scientific Method,
Conferences - COLING 84
----------------------------------------------------------------------
Date: 19 Sep 83 17:50:32-PDT (Mon)
From: harpo!utah-cs!shebs @ Ucb-Vax
Subject: Re: Natural Language Understanding
Article-I.D.: utah-cs.1914
Lest usenet readers think things had gotten silent all at once, here's
an article by Fernando Pereira that (apparently and inexplicably) was
*not* sent to usenet, and my reply (fortunately, I now have read-only
access to Arpanet, so I was able to find out about this).
←←←←←←←←←←←←←←←←←←←←←
Date: Wed 31 Aug 83 18:42:08-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Solutions of the natural language analysis problem
[I will abbreviate the following since it was distributed in V1 #53
on Sep. 1. -- KIL]
Given the downhill trend of some contributions on natural language
analysis in this group, this is my last comment on the topic, and is
essentially an answer to Stan the leprechaun hacker (STLH for short).
[...]
Lack of rigor follows from lack of method. STLH tries to bludgeon us
with "generating *all* the possible meanings" of a sentence. Does he
mean ALL of the INFINITY of meanings a sentence has in general? Even
leaving aside model-theoretic considerations, we are all familiar with
he wanted me to believe P so he said P
he wanted me to believe not P so he said P because he thought
that I would think that he said P just for me to believe P
and not believe it
and so on ...
in spy stories.
[...]
Fernando Pereira
←←←←←←←←←←←←←←←←←←←
The level of discussion *has* degenerated somewhat, so let me try to
bring it back up again. I was originally hoping to stimulate some
debate about certain assumptions involved in NLP, but instead I seem
to see a lot of dogma, which is *very* dismaying. Young idealistic me
thought that AI would be the field where the most original thought was
taking place, but instead everyone seems to be divided into warring
factions, each of whom refuses to accept the validity of anybody
else's approach. Hardly seems scientific to me, and certainly other
sciences don't evidence this problem (perhaps there's some fundamental
truth here - that the nature of epistemology and other AI activities
are such that it's very difficult to prevent one's thought from being
trapped into certain patterns - I know I've been caught a couple
times, and it was hard to break out of the habit - more on that later).
As a colleague of mine put it, we seem to be suffering from a
"difference in context". So let me describe the assumptions
underpinning my theory (yes I do have one):
1. Language is a very fuzzy thing. More precisely, the set of sound
strings meaningful to a human is almost (if not exactly) the set of
all possible sound strings. Now, before you flame, consider: Humans
can get at least *some* understanding out of a nonsense sequence,
especially if they have any expectations about what they're hearing
(this has been demonstrated experimentally) although it will likely be
wrong. Also, they can understand mispronounced or misspelled words,
sentences with missing words, sentences with repeated words, sentences
with scrambled word order, sentences with mixed languages (I used to
have fun by speaking English using German syntax, and you can
sometimes see signs using English syntax with "German" words), and so
forth. Language is also used creatively (especially by netters!). Words
are continually invented, metaphors are created and mixed in novel
ways. I claim that there is no rule of grammar that cannot be
violated. Note that I have said *nothing* about changes of meaning,
nor have I claimed that one could get much of anything out of a random
sequence of words strung together. I have only claimed that the set
of linguistically valid utterances is actually a large fuzzy set (in
the technical sense of "fuzzy"). If you accept this, the implications
for grammar are far-reaching
- in fact, it may be that classical grammar is a curious but basically
irrelevant description of language (however, I'm not completely
convinced of that).
2. Meaning and interpretation are distinct. Perhaps I should follow
convention and say "s-meaning" and "s-interpretation", to avoid
terminology trouble. I think it's noncontroversial that the "true
meaning" of an utterance can be defined as the totality of response to
that utterance. In that case, s-meaning is the individual-independent
portion of meaning (I know, that's pretty vague. But would saying
that 51% of all humans must agree on a meaning make it any more
precise? Or that there must be a predicate to represent that meaning?
Who decides which predicate is appropriate?). Then s-interpretation
is the component that depends primarily on the individual and his
knowledge, etc.
Let's consider an example - "John kicked the bucket." For most
people, this has two s-meanings - the usual one derived directly from
the words and an idiomatic way of saying "John died". Of course,
someone may not know the idiom, so they can assign only one s-meaning.
But as Mr. Pereira correctly points out, there are an infinitude of
s-interpretations, which will completely vary from individual to
individual. Most can be derived from the s-meaning, for instance the
convoluted inferences about belief and intention that Mr. Pereira
gave. On the other hand, I don't normally make those
s-interpretations, and a "naive" person might *never* do so. Other
parts of the s-interpretation could be (if the second s-meaning above
was intended) that the speaker tends to be rather blunt; certainly a
part of the response to the utterance, but is less clearly part of a
"meaning". Even s- meanings are pretty volatile though - to use
another spy story example, the sentence might actually be a code
phrase with a completely arbitrary meaning!
3. Cognitive science is relevant to NLP. Let me be the first to say
that all of its results are at best suspect. However, the apparent
inclination of many AI people to regard the study of human cognition
as "unscientific" is inexplicable. I won't claim that my program
defines human cognition, since that degree of hubris requires at least
a PhD :-) . But cognitive science does have useful results, like the
aforementioned result about making sense out of nonsense. Also, a lot
of common-sense results can be more accurately described by doing
experiments. "Don't think of a zebra for the next ten minutes" - my
informal experimentation indicates that *nobody* is capable - that
seems to say a lot about how humans operate. Perhaps cognitive
science gets a bad review because much of it is Gedanken experiments;
I don't need tests on a thousand subjects to know that most kinds of
ungrammaticality (such as number agreement) are noticeable, but rarely
affect my understanding of a sentence. That's why I say that humans
are experts at their own languages - we all (at least intuitively)
understand the different parts of speech and how sentences are put
together, even though we have difficulty expressing that knowledge
(sounds like the knowledge engineer's problems in dealing with
experts!). BTW, we *have* had a non-expert (a CS undergrad) add
knowledge to our NLP system, and the folks at Berkeley have reported
similar results [Wilensky81].
4. Theories should reflect reality. This is especially important
because the reverse is quite pernicious - one ignores or discounts
information not conforming to one's theories. The equations of motion
are fine for slow-speed behavior, but fail as one approaches c (the
language or the velocity? :-) ). Does this mean that Lorentz
contractions are experimental anomalies? The grammar theory of
language is fine for very restricted subsets of language, but is less
satisfactory for explaining the phenomena mentioned in 1., nor does it
suggest how organisms *learn* language. Mr. Pereira's suggestion that
I do not have any kind of theoretical basis makes me wonder if he
knows what Phrase Analysis *is*, let alone its justification.
Wilensky and Arens of UCB have IJCAI-81 papers (and tech reports) that
justify the method much better than I possibly could. My own
improvement was to make it follow multiple lines of parsing (have to
be contrite on this; I read Winograd's new book recently and what I
have is really a sort of active chart parser; also noticed that he
gives nary a mention to Phrase Analysis, which is inexcusable - that's
the sort of thing I mean by "warring factions").
4a. Reflecting reality means "all of it" or (less preferable) "as
much as possible". Most of the "soft sciences" get their bad
reputation by disregarding this principle, and AI seems to have a
problem with that also. What good is a language theory that cannot
account for language learning, creative use of language, and the
incredible robustness of language understanding? The definition of
language by grammar cannot properly explain these - the first because
of results (again mentioned by Winograd) that children receive almost
no negative examples, and that a grammar cannot be learned from
positive examples alone, the third because the grammar must be
extended and extended until it recognizes all strings as valid. So
perhaps the classical notion of grammar is like classical mechanics -
useful for simple things, but not so good for photon drives or
complete NLP systems. The basic notions in NLP have been thoroughly
investigated;
IT'S TIME TO DEVELOP THEORIES THAT CAN EXPLAIN *ALL* ASPECTS OF
LANGUAGE BEHAVIOR!
5. The existence of "infinite garden-pathing". To steal an example
from [Wilensky80],
John gave Mary a piece of his.........................mind.
Only the last word disambiguates the sentence. So now, what did *you*
fill in, before you read that last word? There's even more
interesting situations. Part of my secret research agenda (don't tell
Boeing!) has been the understanding of jokes, particularly word plays.
Many jokes are multi-sentence versions of garden-pathing, where only
the punch line disambiguates. A surprising number of crummy sitcoms
can get a whole half-hour because an ambiguous sentence is interpreted
differently by two people (a random thought - where *did* this notion
of sentence as fundamental structure come from? Why don't speeches
and discourses have a "grammar" precisely defining *their*
structure?). In general, language is LR(lazy eight).
Miscellaneous comments:
This has gotten pretty long (a lot of accusations to respond to!), so
I'll save the discussion of AI dogma, fads, etc for another article.
When I said that "problems are really concerned with the acquisition
of linguistic knowledge", that was actually an awkward way to say
that, having solved the parsing problem, my research interests
switched to the implementation of full-scale error correction and
language learning (notice that Mr. Pereira did not say "this is
ambiguous - what did you mean?", he just assumed one of the meanings
and went on from there. Typical human language behavior, and
inadequately explained by most existing theories...). In fact, I have
a detailed plan for implementation, but grad school has interrupted
that and it may be a while before it gets done. So far as I can tell,
the implementation of learning will not be unusually difficult. It
will involve inductive learning, manipulation of analogical
representations to acquire meanings ("an mtrans is like a ptrans, but
with abstract objects"....), and other good things. The
nonrestrictive nature of Phrase Analysis seems to be particularly
well-suited to language knowledge acquisition.
Thanks to Winograd (really quite a good book, but biased) I now know
what DCG's are (the paper I referred to before was [Pereira80]). One
of the first paragraphs in that paper was revealing. It said that
language was *defined* by a grammar, then proceeded from there.
(Different assumptions....) Since DCG's were compared only to ATN's,
it was of course easy to show that they were better (almost any
formalism is better than one from ten years before, so that wasn't
quite fair). However, I fail to see any important distinction between
a DCG and a production rule system with backtracking. In that case, a
DCG is really a special case of a Phrase Analysis parser (I did at one
time tinker with the notion of compiling phrase rules into OPS5 rules,
but OPS5 couldn't manage it very well - no capacity for the
parallelism that my parser needed). I am of course interested in
being contradicted on any of this.
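[ A small sketch, not in the original posting, of what a DCG fragment
looks like and how it reduces to ordinary clauses; the toy grammar and
lexicon are invented examples. ]

sentence    --> noun_phrase, verb_phrase.
noun_phrase --> [john].
noun_phrase --> [mary].
verb_phrase --> [sleeps].

/* Each rule is translated into a plain clause with two extra
   difference-list arguments, e.g.
       sentence(S0, S) :- noun_phrase(S0, S1), verb_phrase(S1, S).
   so a query such as
       ?- sentence([john, sleeps], []).
   succeeds by ordinary resolution with backtracking. */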
Mr. Pereira says he doesn't know what the "Schank camp" is. If that's
so then he's the only one in NLP who doesn't. I have heard some
highly uncomplimentary comments about Schank and his students. But
then that's the price for going against conventional wisdom...
Sorry for the length, but it *was* time for some light rather than
heat! I have refrained from saying much of anything about my theories
of language understanding, but will post details if accusations
warrant :-)
Theoretically yours*,
Stan (the leprechaun hacker) Shebs
utah-cs!shebs
* love those double meanings!
[Pereira80] Pereira, F.C.N., and Warren, D.H.D. "Definite Clause
Grammars for Language Analysis - A Survey of the Formalism and
a Comparison with Augmented Transition Networks", Artificial
Intelligence 13 (1980), pp 231-278.
[Wilensky80] Wilensky, R. and Arens, Y. PHRAN: A Knowledge-based
Approach to Natural Language Analysis (Memorandum No.
UCB/ERL M80/34). University of California, Berkeley, 1980.
[Wilensky81] Wilensky, R. and Morgan, M. One Analyzer for Three
Languages (Memorandum No. UCB/ERL M81/67). University of
California, Berkeley, 1981.
[Winograd83] Winograd, T. Language as a Cognitive Process, vol. 1:
Syntax. Addison-Wesley, 1983.
------------------------------
Date: Fri 23 Sep 83 14:34:44-CDT
From: Lauri Karttunen <Cgs.Lauri@UTEXAS-20.ARPA>
Subject: COLING 84 -- Call for papers
[Reprinted from the UTexas-20 bboard.]
CALL FOR PAPERS
COLING 84, TENTH INTERNATIONAL CONFERENCE ON COMPUTATIONAL LINGUISTICS
COLING 84 is scheduled for 2-6 July 1984 at Stanford University,
Stanford, California. It will also constitute the 22nd Annual Meeting
of the Association for Computational Linguistics, which will host the
conference.
Papers for the meeting are solicited on linguistically and
computationally significant topics, including but not limited to the
following:
o Machine translation and machine-aided translation.
o Computational applications in syntax, semantics, anaphora, and
discourse.
o Knowledge representation.
o Speech analysis, synthesis, recognition, and understanding.
o Phonological and morpho-syntactic analysis.
o Algorithms.
o Computational models of linguistic theories.
o Parsing and generation.
o Lexicology and lexicography.
Authors wishing to present a paper should submit five copies of a
summary not more than eight double-spaced pages long, by 9 January
1984 to: Prof. Yorick Wilks, Languages and Linguistics, University of
Essex, Colchester, Essex, CO4 3SQ, ENGLAND [phone: 44-(206)862 286;
telex 98440 (UNILIB G)].
It is important that the summary contain sufficient information,
including references to relevant literature, to convey the new ideas
and allow the program committee to determine the scope of the work.
Authors should clearly indicate to what extent the work is complete
and, if relevant, to what extent it has been implemented. A summary
exceeding eight double-spaced pages in length may not receive the
attention it deserves.
Authors will be notified of the acceptance of their papers by 2 April
1984. Full length versions of accepted papers should be sent by 14
May 1984 to Dr. Donald Walker, COLING 84, SRI International, Menlo
Park, California, 94025, USA [phone: 1-(415)859-3071; arpanet:
walker@sri-ai].
Other requests for information should be addressed to Dr. Martin Kay,
Xerox PARC, 3333 Coyote Hill Road, Palo Alto, California 94304, USA
[phone: 1-(415)494-4428; arpanet: kay@parc].
------------------------------
End of AIList Digest
********************
∂25-Sep-83 2055 LAWS@SRI-AI.ARPA AIList Digest V1 #63
Received: from SRI-AI by SU-AI with TCP/SMTP; 25 Sep 83 20:54:48 PDT
Date: Sunday, September 25, 1983 7:47PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #63
To: AIList@SRI-AI
AIList Digest Monday, 26 Sep 1983 Volume 1 : Issue 63
Today's Topics:
Robotics - Physical Strength,
Parallelism & Physiology,
Intelligence - Turing Test,
Learning & Knowledge Representation,
Rational Psychology
----------------------------------------------------------------------
Date: 21 Sep 83 11:50:31-PDT (Wed)
From: ihnp4!mtplx1!washu!eric @ Ucb-Vax
Subject: Re: Strong, agile robot
Article-I.D.: washu.132
I just glanced at that article for a moment, noting the leg mechanism
detail drawing. It did not seem to me that the beastie could move
very fast. Very strong IS nice, tho... Anyway, the local supplier of
that mag sold them all. Anyone remember if it said how fast it could
move, and with what payload?
eric ..!ihnp4!washu!eric
------------------------------
Date: 23 Sep 1983 0043-PDT
From: FC01@USC-ECL
Subject: Parallelism
I thought I might point out that virtually no machine built in the
last 20 years is actually lacking in parallelism. In reality, just as
the brain has many neurons firing at any given time, computers have
many transistors switching at any given time. Just as the cerebellum
is able to maintain balance without the higher brain functions in the
cerebrum explicitly controlling the IO, most current computers have IO
controllers capable of handling IO while the CPU does other things.
Just as people have faster short term memory than long term memory but
less of it, computers have faster short term memory than long term
memory and use less of it. These are all results of cost/benefit
tradeoffs for each implementation, just as I presume our brains and
bodies are. Don't be so fast to think that real computer designers are
ignorant of physiology. The trend towards parallelism now is more like
the human social system of having a company work on a problem. Many
brains, each talking to each other when they have questions or
results, each working on different aspects of a problem. Some people
have breakdowns, but the organization keeps going. Eventually it comes
up with a product; although it may not really solve the problem posed
at the beginning, it may have solved a related problem or found a
better problem to solve.
Another copyrighted excerpt from my not yet finished book on
computer engineering modified for the network bboards, I am ever
yours,
Fred
------------------------------
Date: 14 Sep 83 22:46:10-PDT (Wed)
From: pur-ee!uiucdcs!uicsl!dinitz @ Ucb-Vax
Subject: Re: in defense of Turing - (nf)
Article-I.D.: uiucdcs.2822
Two points where Martin Taylor's response reveals that I was not
emphatic enough [you see, it is possible to underflame, and thus be
misunderstood!] in my comments on the Turing test.
1. One of Dennett's main points (which I did not mention, since David
Rogers had already posted it in the original note of this string) is
that the unrestricted Turing-like test of which he spoke is a
SUFFICIENT, but not a NECESSARY test for intelligence comparable to
that possessed and displayed by most humans in good working order. [I
myself would add that it tests as much for mastery of human
communication skills (which are indeed highly dependent on particular
cultures) as it does for intelligence.] That is to say, if a program
passes such a rigorous test, then the practitioners of AI may
congratulate themselves for having built such a clever beast.
However, a program which fails such a test need not be considered
unintelligent. Indeed, a human who fails such a test need not be
considered unintelligent -- although one would probably consider
him/her to be of substandard intelligence, or of impaired
intelligence, or dyslexic, or incoherent, or unconscious, or amnesic,
or aphasic, or drunk (i.e. disabled in some fashion).
2. I did not post "a set of criteria which an AI system should pass to
be accepted as human-like at a variety of levels." I posted a set of
tests by which to gauge progress in the field of AI. I don't imagine
that these tests have anything to do with human-ness. I also don't
imagine that many people who discuss and discourse upon "intelligence"
have any coherent definition for what it might be.
Other comments that seem relevant (but might not be)
----- -------- ---- ---- -------- ---- ----- --- ---
Neither Dennett's test, nor my tests are intended to discern whether
or not the entity in question possesses a human brain.
In addition to flagrant use of hindsight, my tests also reveal my bias
that science is an endeavor which requires intelligence on the part of
its human practitioners. I don't mean to imply that it is the only
such domain. Other domains which require that the people who live in
them have "smarts" are puzzle solving, language using, language
learning (both first and second), etc. Other tasks not large enough
to qualify as domains that require intelligence (of a degree) from
people who do them include: figuring out how to use a paper clip or a
stapler (without being told or shown), figuring out that someone was
showing you how to use a stapler (without being told that such
instruction was being given), improvising a new tool or method for a
routine task that one is accustomed to doing with an old tool or
method, realizing that an old method needs improvement, etc.
The interdependence of intelligence and culture is much more important
than we usually give it credit for. Margaret Mead must have been
quite a curiosity to the peoples she studied. Imagine that a person
of such a different and strange (to us) culture could be made to
understand enough about machines and the Turing test so that he/she
could be convinced to serve as an interlocutor... On second thought,
that opens up such a can of worms that I'd rather deny having proposed
it in the first place.
------------------------------
Date: 19 Sep 83 17:43:53-PDT (Mon)
From: harpo!utah-cs!shebs @ Ucb-Vax
Subject: Re: Rational Psychology
Article-I.D.: utah-cs.1913
I just read Jon Doyle's article about Rational Psychology in the
latest AI Magazine (Fall '83), and am also very interested in the
ideas therein. The notion of trying to find out what is *possible*
for intelligences is very intriguing, not to mention the idea of
developing some really sound theories for a change.
Perhaps I could mention something I worked on a while back that
appears to be related. Empirical work in machine learning suggests
that there are different levels of learning - learning by being
programmed, learning by being told, learning by example, and so forth,
with the levels being ordered by their "power" or "complexity",
whatever that means. My question: is there something fundamental
about this classification? Are there other levels? Is there a "most
powerful" form of learning, and if so, what is it?
I took the approach of defining "learning" as "behavior modification",
even though that includes forgetting (!), since I wasn't really
concerned with whether the learning resulted in an "improvement" in
behavior or not. The model of behavior was somewhat interesting.
It's kind of a dualistic thing, consisting of two entities: the
organism and the environment. The environment is everything outside,
including the organism's own physical body, while the organism is
more or less equivalent to a mind. Each of these has a state, and
behavior can be defined as functions mapping the set of all states to
itself. Both the environment and the organism have behaviors that can
be treated in the same way (that is, they are like mirror images of
each other). The whole development is too elaborate for an ASCII
terminal, but it boiled down to this: that since learning is a part
of behavior, but it also *modifies* behavior, then there is a part of
the behavior function that is self-modifying. One can then define
"1st order learning" as that which modifies ordinary behavior. 2nd
order learning would be "learning how to learn", 3rd order would be
"learning how to learn how to learn" (whatever *that* means!). The
definition of these is more precise than my Anglicization here, and
seems to indicate a whole infinite hierarchy of learning types, each
supposedly more powerful than the last. It doesn't do much for my
original questions, because the usual types of learning are all 1st
order - although they don't have to be. Lenat's work on learning
heuristics might be considered 2nd order, and if you look at it in the
right way, it may be that EURISKO actually implements all
orders of learning at the same time, so the above discussion is
garbage (sigh).
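In symbols, the development above boils down to something like this
(the notation is only illustrative; S, b, and the L's are not names
used anywhere in the write-up):

    \[
      b \colon S \to S
      \quad\text{(behavior maps the set of all states to itself)}
    \]
    \[
      L_1 \colon (S \to S) \to (S \to S)
      \quad\text{(1st-order learning modifies behavior)}
    \]
    \[
      L_{k+1} \text{ acts on } L_k \text{ exactly as } L_1 \text{ acts on } b,
      \qquad L_0 = b .
    \]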
Another question that has concerned me greatly (particularly since
building my parser) is the relation of the Halting Problem to AI. My
program was basically a production system, and had an annoying
tendency to get caught in infinite loops of various sorts. More
misfeatures than bugs, though, since the theory did not expressly
forbid such loops! To take a more general example, why don't circular
definitions cause humans to go catatonic? What is the mechanism that
seems to cut off looping? Do humans really beat the Halting Problem?
One possible mechanism is that repetition is boring, and so all loops
are cut off at some point or else pushed so far down on the agenda of
activities that they are effectively terminated. What kind of theory
could explain this?
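One crude way to build the "repetition is boring" mechanism into a
production-system driver (a sketch of my own; step/2 is an invented
single-step relation, not anything from my parser):

    % Run a toy rewrite system, refusing to revisit a state already seen.
    run(State, _Seen, State) :-
        \+ step(State, _).            % no rule applies: halt normally
    run(State, Seen, Final) :-
        step(State, Next),
        \+ member(Next, Seen),        % "boring": cut off repeated states
        run(Next, [Next|Seen], Final).

    % Toy rules that would loop forever without the check:
    step(a, b).
    step(b, a).
    step(b, c).

    % ?- run(a, [a], X).
    % X = c

Of course this only catches exact repetition of a state, not the
subtler loops.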
Yet another (last one folks!) question is one that I raised a while
back, about all representations reducing down to attribute-value
pairs. Yes, they used to be fashionable but are now out of style, but
I'm talking about a very deep underlying representation, in the same
way that the syntax of s-expressions underlies Lisp. Counterexamples
to my conjecture about AV-pairs being universal were algebraic
expressions (which can be turned into s-expressions, which can be
turned into AV-pairs) and continuous values, but they must have *some*
closed form representation, which can then be reduced to AV-pairs. So
I remained unconvinced that the notion of objects with AV-pairs
attached is *not* universal (of course, for some things, the
representation is so primitive as to be as bad as Fortran, but then
this is an issue of possibility, not of goodness or efficiency).
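An example of the reduction I have in mind (the encoding is my own
invention, offered only as an illustration): the algebraic expression
x + 2*y, first as an s-expression, then flattened into objects
carrying attribute-value pairs.

    % s-expression form:  (+ x (* 2 y))
    sexpr(['+', x, ['*', 2, y]]).

    % AV-pair form: each node becomes an object whose attributes are
    % op, arg1, and arg2; node1 and node2 are arbitrary object names.
    av(node1, op,   '+').
    av(node1, arg1, x).
    av(node1, arg2, node2).
    av(node2, op,   '*').
    av(node2, arg1, 2).
    av(node2, arg2, y).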
Looking forward to comments on all of these questions...
stan the l.h.
utah-cs!shebs
------------------------------
Date: 22 Sep 83 11:26:47-PDT (Thu)
From: ihnp4!drux3!drufl!samir @ Ucb-Vax
Subject: Re: Rational Psychology
Article-I.D.: drufl.663
To me personally, Rational Psychology is a misnomer.
"Rational" negates what "Psychology" wants to understand.
Flames to /dev/null.
Interesting discussions welcome.
Samir Shah
drufl!samir
AT&T Information Systems, Denver.
------------------------------
Date: 22 Sep 83 17:12:11-PDT (Thu)
From: ihnp4!houxm!hogpc!houti!ariel!norm @ Ucb-Vax
Subject: Re: Rational Psychology
Article-I.D.: ariel.456
Samir's view: "To me personally, Rational Psychology
is a misnomer. "Rational" negates
what "Psychology" wants to understand."
How so?
Can you support your claim? What does psychology want to understand
that Rationality negates? Psychology is the Logos of the Psyche or
the logic of the psyche. How does one understand without logic? How
does one understand without rationality? What is understand? Isn't
language itself dependent upon the rational faculty, or more
specifically, upon the ability to form concepts, as opposed to
percepts? Can you understand without language? To be totally without
rationality (lacking the functional capacity for rationality
- the CONCEPTUAL faculty) would leave you without language, and
therefore without understanding. In what TERMS is something said to
be understood? How can terms have meaning without rationality?
Or perhaps you might claim that because men are not always rational,
man does not possess a rational faculty, or that it is defective,
or inadequate? How about telling us WHY you think Rational negates
Psychology?
These issues are important to AI, psychology and philosophy
students... The day may not be far off when AI research yields
methods of feature abstraction and integration that approximate
percept-formation in humans. The next step, concept formation, will
be much harder. How does an epistemology come about? What are the
sequential steps necessary to form an epistemology of any kind? By
what method does the mind (what's that?) integrate percepts into
concepts, make identifications on a conceptual level ("It is an X"),
justify its identifications ("and I know it is an X because..."), and
then decide (what's that?) what to do about it ("...so therefore I
should do Y")?
Do you seriously think that understanding these things won't take
Rationality?
Norm Andrews, AT&T Information Systems, Holmdel, N.J. ariel!norm
------------------------------
Date: 22 Sep 83 12:02:28-PDT (Thu)
From: decvax!genrad!mit-eddie!mit-vax!eagle!mhuxi!mhuxj!mhuxl!achilles
!ulysses!princeton!leei@Ucb-Vax
Subject: Re: Rational Psychology
Article-I.D.: princeto.77
I really think that the ability that we humans have that allows us to
avoid looping is the simple ability to recognize a loop in our logic
when it happens. This comes as a direct result of our tendency for
constant self-inspection and self-evaluation. A machine with this
ability, and the ability to inspect its own self-inspections . . .,
would probably also be able to "solve" the halting problem.
Of course, if the loop is too subtle or deep, then even we cannot see
it. This may explain the continued presence of various belief systems
that rely on inherently circular logic to get past their fundamental
problems.
-Lee Iverson
..!princeton!leei
------------------------------
End of AIList Digest
********************
∂26-Sep-83 0605 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #30
Received: from SU-SCORE by SU-AI with TCP/SMTP; 26 Sep 83 06:05:01 PDT
Date: Sunday, September 25, 1983 2:51PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #30
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Monday, 26 Sep 1983 Volume 1 : Issue 30
Today's Topics:
Architecture - A Parallel Challenge,
Announcement - FGCS'84 Conference
----------------------------------------------------------------------
Date: Sunday, 25-Sep-83 21:34:53-BST
From: OKeefe.R.A. <OKeefe.R.A.%EDXA@UCL-CS>
Subject: A Challenge for Parallel Logic Architectures
There are quite a few people getting into parallel logic programming.
There seem to be two main streams:
- data flow ( IC's ALICE machine, CIT's token machine )
- MIMD ( PRISM, DADO, a couple of others )
The MIMD approach has N separate processors ( usually some sort of
micro ) and a communication network, and clauses get distributed
across machines in such a way that when you have a goal G a large
fraction of the N machines have one or more clauses that they can
work on in parallel. Great. The trouble is communicating the
answers ( and the new subgoals ). Some of the systems I have
looked at do a better job of reducing the number of messages than
others, but the real trouble is that in logic programs ( as opposed
to logic query languages ) the size of a goal can grow without
limit. Before people jump on me and point out that you can avoid
the exponential growth in
f(X,X) where X = f(Y,Y) and Y = f(Z,Z) & ... & W = f(a,a)
that isn't what I'm talking about. I'm talking about a simple
little thing like an assembler written in Prolog, working on a file
represented as a list of characters, producing a list of
instructions, and then maybe doing a couple of passes on the list of
instructions, not to mention carrying around a symbol table as a 2-3
tree. The data structures can grow rather large. On a single
machine you just pass around pointers; you seem to need some sort of
global memory for this ( or at any rate a global addressing scheme,
even if not all processors can access all parts of the memory ).
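To fix the flavour of what I mean, a toy sketch of one such pass
( the instruction names are invented for illustration, nothing to do
with EM-1 or OCODE; the point is only the list-to-list shape and the
symbol table that has to be threaded along ):

    % Two passes over a list of symbolic instructions: collect label
    % addresses, then encode, looking jump targets up in the table.
    assemble(Instrs, Words) :-
        collect_labels(Instrs, 0, [], Table),
        encode(Instrs, Table, Words).

    collect_labels([], _, Table, Table).
    collect_labels([label(L)|Rest], PC, T0, Table) :-
        !, collect_labels(Rest, PC, [L-PC|T0], Table).
    collect_labels([_|Rest], PC, T0, Table) :-
        PC1 is PC + 1,
        collect_labels(Rest, PC1, T0, Table).

    encode([], _, []).
    encode([label(_)|Rest], Table, Words) :-
        !, encode(Rest, Table, Words).
    encode([jump(L)|Rest], Table, [word(jmp, Addr)|Words]) :-
        member(L-Addr, Table),
        encode(Rest, Table, Words).
    encode([push(N)|Rest], Table, [word(push, N)|Words]) :-
        encode(Rest, Table, Words).

    % ?- assemble([push(1), label(top), push(2), jump(top)], W).
    % W = [word(push,1), word(push,2), word(jmp,1)]

Even in a toy like this, Words and Table are shared structures that
every processor working on the goal needs to see.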
The challenge is this:
- Write an assembler for a large fraction of Tanenbaum's EM-1
imaginary machine, or perhaps for OCODE, and if you have time,
a peephole optimiser.
- Write an assembly code program ( for concreteness, implement
Sedgewick's version of Quicksort for integers, as described
in CACM ) and assemble it using your assembler, running under
a simulator of your architecture
- Obtain statistics on the number and size of messages
transmitted;
- Publish the results! ( Even in this Digest. )
At the moment, everyone seems to be thinking in terms of function-free
non-recursive programs, and that is a Good Thing, because most Prolog
programs have quite a bit of data of that sort. But there are other
sorts of programs we'd like to go fast too. I WANT to be convinced
that Practical Parallel Prolog is just round the corner. Will you
have a go at convincing me? Thanks.
------------------------------
Date: Sat 24 Sep 83 18:53:35-PDT
From: David Warren <Warren@SRI-AI>
Subject: FGCS'84 Conference
CALL FOR PAPERS - FGCS'84
International Conference on Fifth Generation Computer Systems, 1984
Institute for New Generation Computer Technology
Tokyo, Japan, November 6-9, 1984
The scope of technical sessions of this conference encompasses the
technical aspects of new generation computer systems which are being
explored particularly within the framework of logic programming and
novel architectures. This conference is intended to promote
interaction among researchers in all disciplines related to fifth
generation computer technology. The topics of interest include
( but are not limited to ) the following:
PROGRAM AREAS
Foundations for Logic Programs
- Formal semantics / pragmatics
- Computational models
- Program analysis and complexity
- Philosophical aspects
- Psychological aspects
Logic Programming Languages / Methodologies
- Parallel / object-oriented programming languages
- Meta-level inferences / control
- Intelligent programming environments
- Program synthesis / understanding
- Program transformation / verification
Architectures for New Generation Computing
- Inference machines
- Knowledge base machines
- Parallel processing architectures
- VLSI architectures
- Novel human-machine interfaces
Applications of New Generation Computing
- Knowledge representation / acquisition
- Expert systems
- Natural language understanding / machine translation
- Graphics / vision
- Games / simulation
Impacts of New Generation Computing
- Social / cultural
- Educational
- Economic
- Industrial
- International
ORGANIZATION OF THE CONFERENCE
Conference Chairman: Tohru Moto-oka, University of Tokyo
Conference V.Chairman: Kazuhiro Fuchi, ICOT
Program Chairman: Hideo Aiso, Keio University
Publicity Chairman: Kinko Yamamoto, JIPDEC
Secretariat: FGCS'84 Secretariat
Institute for New Generation Computer Technology (ICOT)
Mita Kokusai Bldg. 21F
1-4-28 Mita, Minato-ku
Tokyo 108, Japan
Phone: 03-456-3195 Telex: 32964 ICOT
PAPER SUBMISSION REQUIREMENTS
Four copies of manuscripts should be submitted by April 15, 1984 to:
Prof. Hideo Aiso
Program Chairman
ICOT
Mita Kokusai Bldg. 21F
1-4-28 Mita, Minato-ku
Tokyo 108, Japan
Papers are restricted to 20 double-spaced pages ( about 5000
words ) including figures. Each paper must contain a 200-250
word abstract. Papers must be written and presented in English.
Papers will be reviewed by international referees. Authors will
be notified of acceptance by June 30, 1984, and will be given
instructions for final preparation of their papers at that time.
Camera-ready papers for the proceedings should be sent to the
Program Chairman prior to August 31, 1984.
( Intending authors are requested to return a reply card with
tentative subjects ).
GENERAL INFORMATION
Date: November 6-9, 1984
Venue: Keio Plaza Hotel, Tokyo, Japan
Host: Institute for New Generation Computer Technology
Outline of the Conference Program:
General sessions
Keynote speeches
Report of research activities on Japan's FGCS Project
Panel discussions
Technical sessions ( Parallel sessions )
Presentation by invited speakers
Presentation of submitted papers
Special events
Demonstration of current research results
Technical visit
Official languages: English / Japanese
Participants: 600
Further information:
Conference information will be available in December, 1983.
( For information, complete and return a reply card ).
REPLY CARD
Name: (Prof. Dr. Mr. Ms.) ←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Family Middle First
Affiliation (position/organization): ←←←←←←←←←←←←←←←←←←←←←←←←←←←
Affiliation address: ←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Phone: ←←←←←←←←←←←
* I wish to submit a paper with tentative subject: ←←←←←←←←←←←←←
* I wish to attend FGCS'84.
* I wish to receive further information.
------------------------------
End of PROLOG Digest
********************
∂26-Sep-83 0913 ELYSE@SU-SCORE.ARPA Letter from Baudoin
Received: from SU-SCORE by SU-AI with TCP/SMTP; 26 Sep 83 09:13:05 PDT
Date: Mon 26 Sep 83 09:13:37-PDT
From: Elyse Krupnick <ELYSE@SU-SCORE.ARPA>
Subject: Letter from Baudoin
To: faculty@SU-SCORE.ARPA
Reply-To: to Golub
Stanford-Phone: (415) 497-9746
Here is a copy of the letter sent by Claude Baudoin concerning alumni funds.
-Gene.
Dear Mr. Brown,
I want to thank you for your personal letter of Dec. 14. Although I would have
sent my contribution to this year's fund drive shortly anyway, your letter came
at the right time and I mailed my check the next day. The fact that I am now
a U.S. resident (working for Schlumberger in Austin after working for them in
Paris for 5 years) gives me the added incentive of tax deductions which I could
not claim as a French resident.
May I seize this opportunity to suggest to you a few ways in which the
university could attract (still) more support from its alumni? Mostly, I
think (and I may be wrong) that many alumni have a stronger rapport to their
academic department (or school) than they have to the university as such.
However, departments do little to support the continued interest of the
alumni in the university.
-departments do not keep lines of communication open with their alumni; they
should inform us of professor assignments, PhD's granted (some of the persons
concerned were classmates, especially in the first few years after we left
campus), the evolution of the course mix, new buildings and lab facilities,
etc. I received one such letter from the Computer Science Dept. in 1975, and
nothing since then. Only the tip of the iceberg shows in the Stanford
Observer or the Stanford Magazine.
-departments, and the university as a whole, should try to use the good will of
alumni in other ways than financial, especially in order to attract talent to
Stanford. In 8 years since I left Stanford, I have never received a single
inquiry from a prospective student referred to me by the CS dept.
-departments should not be indifferent to alumni when they visit Stanford.
I know that both professors and students have extremely busy schedules,
but I met a general, definite and demotivating "please do not disturb"
attitude when I happened to visit (which, because I lived in France, occurred
only twice in 8 years anyway).
I will continue to be a faithful supporter of Stanford, because I hold the
University in high esteem and I am grateful to it for what it taught me. I
would be still more enthusiastic when opening my checkbook, and other alumni
who currently do not contribute might start doing so, if we felt we were
still considered part of the Stanford community-at-large, instead of being
just people to whom the administration writes every year when time comes to get
more money.
I hope you did not object to the direct way in which I expressed my thoughts.
You have my best wishes for success in your current effort.
Sincerely,
C. Baudoin
-------
∂26-Sep-83 0936 SCHREIBER@SU-SCORE.ARPA Where
Received: from SU-SCORE by SU-AI with TCP/SMTP; 26 Sep 83 09:35:54 PDT
Date: Mon 26 Sep 83 09:36:01-PDT
From: Robert Schreiber <SCHREIBER@SU-SCORE.ARPA>
Subject: Where
To: faculty@SU-SCORE.ARPA
I intend to give one substantial programming assignment in CS137A this fall.
Must I require all the students to do their programs on LOTS? Can
students with access to their own computers, or those belonging to their
companies, do the assignment there?
Rob
-------
∂26-Sep-83 0949 SHARON@SU-SCORE.ARPA Prof. Misra
Received: from SU-SCORE by SU-AI with TCP/SMTP; 26 Sep 83 09:48:50 PDT
Date: Mon 26 Sep 83 09:49:33-PDT
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
Subject: Prof. Misra
To: CSD-Faculty: ;
I am trying to locate a visiting professor by the name of Misra. He has
received quite a bit of mail and a couple phone calls, but I have not
been informed that he is in our department. Please contact me if you
know where he can be located.
Thanks, Sharon
-------
∂26-Sep-83 1012 @SU-SCORE.ARPA:REG@SU-AI
Received: from SU-SCORE by SU-AI with TCP/SMTP; 26 Sep 83 10:12:42 PDT
Received: from SU-AI.ARPA by SU-SCORE.ARPA with TCP; Mon 26 Sep 83 10:13:00-PDT
Date: 26 Sep 83 1010 PDT
From: Ralph Gorin <REG@SU-AI>
To: Faculty@SU-SCORE
Instructional Computing
In response to Rob Schreiber's question,
In my opinion, no instructor can require the use of LOTS to the exclusion
of computers owned by students, their employers, their research projects,
etc. However, it's the policy of the Computer Science Department that
departmental facilities must not be used for coursework, for two reasons:
First, we must not provide a facility by which our students are given
an academic advantage over students from other departments who
are taking our courses.
Second, it is research, not instruction, that pays for our facilities.
Exceptions to this general policy can be made. Such exceptions
must answer both objections stated above. Thus, an entire class must be
granted the use of a CF machine, and arrangements to pay for such use
must be made. Generally, the department Chairman, with advice from
the faculty and CF, will determine when exceptions may be allowed.
Ralph
∂26-Sep-83 1419 @SU-SCORE.ARPA:reid@Glacier Re: Where
Received: from SU-SCORE by SU-AI with TCP/SMTP; 26 Sep 83 14:19:03 PDT
Delivery-Notice: While sending this message to SU-AI.ARPA, the
SU-SCORE.ARPA mailer was obliged to send this message in 50-byte
individually Pushed segments because normal TCP stream transmission
timed out. This probably indicates a problem with the receiving TCP
or SMTP server. See your site's software support if you have any questions.
Received: from Glacier by SU-SCORE.ARPA with TCP; Mon 26 Sep 83 14:18:49-PDT
Date: Monday, 26 September 1983 14:17:15-PDT
To: Robert Schreiber <SCHREIBER@SU-SCORE.ARPA>
Cc: faculty@SU-SCORE.ARPA
Subject: Re: Where
In-Reply-To: Your message of Mon 26 Sep 83 09:36:01-PDT.
From: Brian Reid <reid@Glacier>
I have always permitted students to use for classwork any computer
whose management allows them to do their classwork on it. CSD-CF's
management does not permit students to do classwork on CSD research
machines. Student-owned personal computers and many company-owned
computers fall into the "permitted" category.
Brian
∂26-Sep-83 1436 ELYSE@SU-SCORE.ARPA Agenda for Faculty Meeting Tomorrow
Received: from SU-SCORE by SU-AI with TCP/SMTP; 26 Sep 83 14:36:30 PDT
Date: Mon 26 Sep 83 14:37:11-PDT
From: Elyse Krupnick <ELYSE@SU-SCORE.ARPA>
Subject: Agenda for Faculty Meeting Tomorrow
To: faculty@SU-SCORE.ARPA
cc: bosack@SU-SCORE.ARPA, gorin@SU-SCORE.ARPA, YM@SU-AI.ARPA, op@SU-AI.ARPA,
reges@SU-SCORE.ARPA, rindfleisch@SUMEX-AIM.ARPA, scott@SU-SCORE.ARPA,
tajnai@SU-SCORE.ARPA, mwalker@SU-SCORE.ARPA, yearwood@SU-SCORE.ARPA
Stanford-Phone: (415) 497-9746
@begin(verbatim)
AGENDA
FACULTY MEETING
September 27, 1983
Room 146 - MJH
@u(Presenter) @u(Approx. time)
1. Presentation of Degree Candidates Walker 10 mins.
2. Selected Committee Reports
Admissions Reid 5 mins.
Computer Forum Lenat/Tajnai 5 mins.
Computer Facilities Bosack 5 mins.
3. Finances Scott 5 mins.
4. Appointments
a) Manolis Katevenis
-Assistant Professor
b) Jussi Ketonen
-Senior Research Associate
c) Leo Guibas
-Consulting Associate Professor
Golub 10 mins.
5. Space Policy Yearwood 15 mins.
6. Computer Usage Policy Ullman/Golub 10 mins.
7. Announcements Golub 10 mins.
8. New Business Golub 15 mins.
cc: L. Bosack
R. Gorin
H. Llull
Y. Malachi
O. Patashnik
S. Reges
T. Rindfleisch
B. Scott
C. Tajnai
R. Treitel
M. Walker
M. Yearwood
@end(verbatim)
-------
∂26-Sep-83 1536 @SU-SCORE.ARPA:OR.STEIN@SU-SIERRA.ARPA Re: Colloquium
Received: from SU-SCORE by SU-AI with TCP/SMTP; 26 Sep 83 15:36:34 PDT
Received: from SU-SIERRA.ARPA by SU-SCORE.ARPA with TCP; Mon 26 Sep 83 15:36:51-PDT
Date: Mon 26 Sep 83 15:36:43-PDT
From: Gail Stein <OR.STEIN@SU-SIERRA.ARPA>
Subject: Re: Colloquium
To: LENAT@SU-SCORE.ARPA, faculty@SU-SCORE.ARPA
cc: OR.STEIN@SU-SIERRA.ARPA
In-Reply-To: Message from "Doug Lenat <LENAT@SU-SCORE.ARPA>" of Thu 15 Sep 83 13:22:02-PDT
T.C. Hu from UC San Diego is interested in presenting a lecture during October. Perhaps you should contact him. --- Gail Stein
-------
∂26-Sep-83 1540 @SU-SCORE.ARPA:FY@SU-AI reception at Don Knuth's home
Received: from SU-SCORE by SU-AI with TCP/SMTP; 26 Sep 83 15:40:27 PDT
Received: from SU-AI.ARPA by SU-SCORE.ARPA with TCP; Mon 26 Sep 83 15:40:26-PDT
Date: 26 Sep 83 1538 PDT
From: Frank Yellin <FY@SU-AI>
Subject: reception at Don Knuth's home
To: faculty@SU-SCORE
Prof. Donald Knuth will be hosting a reception for all new graduate students
at his home this Saturday at noon.
All faculty members and their spouses are invited to attend and meet the
new students.
If you wish to attend, please RSVP to me (FY@SAIL, YELLIN@SCORE)
by this Wednesday.
-- Frank Yellin
Orientation Committee
P.S. If you are hosting any visiting faculty, please pass this message along
to them. Thanks.
-------
∂26-Sep-83 2348 LAWS@SRI-AI.ARPA AIList Digest V1 #64
Received: from SRI-AI by SU-AI with TCP/SMTP; 26 Sep 83 23:47:27 PDT
Date: Monday, September 26, 1983 9:28PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #64
To: AIList@SRI-AI
AIList Digest Tuesday, 27 Sep 1983 Volume 1 : Issue 64
Today's Topics:
Database Systems - DBMS Software Available,
Symbolic Algebra - Request for PRESS,
Humor - New Expert Systems,
AI at Edinburgh - Michie & Turing Institute,
Rational Psychology - Definition,
Halting Problem & Learning,
Knowledge Representation - Course Announcement
----------------------------------------------------------------------
Date: 21 Sep 83 16:17:08-PDT (Wed)
From: decvax!wivax!linus!philabs!seismo!hao!csu-cs!denelcor!pocha@Ucb-Vax
Subject: DBMS Software Available
Article-I.D.: denelcor.150
Here are 48 vendors of the most popular DBMS packages which will be presented
at the National Database & 4th Generation Language Symposium.
Boston, Dec. 5-8, 1983, Radisson-Ferncroft Hotel, 50 Ferncroft Rd., Danvers, MA.
For information write: Software Institute of America, 339 Salem St, Wakefield,
Mass 01880, (617)246-4280.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Applied Data Research DATACOM, IDEAL |Mathamatica Products RAMIS II
Battelle - - - - - - - BASIS |Manager Software Prod. DATAMANAGER
Britton-Lee IDM | DESIGNMANAGER
Cincom Systems TIS, TOTAL, | SOURCEMANAGER
MANTIS |National CSS, Inc. NOMAD2
Computer Associates CA-UNIVERSE |Oracle Corp. ORACLE
Computer Co. of America MODEL 204 |Perkin-Elmer RELIANCE
PRODUCT LINE |Prime Computer PRIME DBMS
Computer Techniques QUEO-IV | INFORMATION
Contel - - - - - - - - RTFILE |Quassar Systems POWERHOUSE
Cullinet Software IDMS, ADS | POWERPLAN
Database Design, Inc. DATA DESIGNER |Relational Tech. Inc. INGRES
Data General DG/DBMS |Rexcom Corp. REXCOM
PRESENT |Scientific Information SIR/DBMS
Digital Equipment Co. VAX INFO. ARCH |Seed Software SEED
Exact Systems & Prog. DNA-4 |Sensor Based System METAFILE
Henco Inc. INFO |Software AG of N.A. ADABAS
Hewlett Packard IMAGE |Software House SYSTEM 1022
IBM Corp. SQL/DS, DB2 |Sydney Development Co. CONQUER
Infodata Systems INQUIRE |Tandem Computers ENCOMPASS
Information Builders FOCUS |Tech. Info. Products IP/3
Intel Systems Corp. SYSTEM 2000 |Tominy, Inc. DATA BASE-PLUS
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
John Pocha
Denelcor, Inc.
17000 E. Ohio Place
Aurora, Colorado 80017
work (303)337-7900 x379
home (303)794-5190
{csu-cs|nbires|brl-bmd}!denelcor!pocha
------------------------------
Date: 23 Sep 83 19:04:12-PDT (Fri)
From: decvax!tektronix!tekchips!wm @ Ucb-Vax
Subject: Request for PRESS
Article-I.D.: tekchips.317
Does anyone know where I can get the PRESS algebra system, by Alan
Bundy, written in Prolog?
Wm Leler
tektronix!tekchips!wm
wm.Tektronix@Rand-relay
------------------------------
Date: 23 Sep 83 1910 EDT (Friday)
From: Jeff.Shrager@CMU-CS-A
Subject: New expert systems announced:
Dentrol: A dental expert system based upon tooth maintenance
principles.
Faust: A black magic advisor with mixed initiative goal generation.
Doug: A system which will convert any given domain into set theory.
Cray: An expert arithmetic advisor. Heuristics exist for any sort of
real number computation involving arithmetic functions (+, -,
and several others) within a finite (but large) range around 0.0.
The heuristics are shown to be correct for typical cases.
Meta: An expert at thinking up new domains in which there should be
expert systems.
Flamer: An expert at seeming to be an expert in any domain in which it
is not an expert.
IT: (The Illogic Theorist) An expert at fitting any theory to any quantity
of protocol data. Theories must be specified in "ITLisp" but IT can
construct the protocols if need be.
------------------------------
Date: 22 Sep 83 23:25:15-PDT (Thu)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: Re: U of Edinburgh, Scotland Inquiry - (nf)
Article-I.D.: uiucdcs.2935
I can't tell you about the Dept of AI at Edinburgh, but I do know
about the Machine Intelligence Research Unit chaired by Prof. Donald
Michie.
The MIRU will fold in future, because Prof Michie intends to set up a
new research institute in the UK. He's been planning this and fighting
for it for quite a while now. It will be called the "Turing
Institute", and is intended to become one of the prime centers of AI
research in the UK. In fact, it will be one of the very few centers at
which research is the top priority, rather than teaching. Michie has
recently been approached by the University of Strathclyde near
Glasgow, which is interested in functioning as the associated teaching
institution (cp SRI and Stanford). If that works out, the Turing
Institute may be operational by September 1984.
------------------------------
Date: 23 Sep 83 5:04:46-PDT (Fri)
From: decvax!microsoft!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: Rational Psychology
Article-I.D.: ssc-vax.538
(should be posting from utah, but I saw it here first and just
couldn't resist...)
I think we've got a terminology problem here. The word "rational" is
so heavily loaded that it can hardly move! (as net.philosophy readers
well know). The term "rational psychology" does seem to exclude
non-rational behavior (whatever that is) from consideration, but that
is not so at all. Rather, the idea is to explore the entire universe
of possibilities for intelligent behavior, rather than restricting
oneself to observing the average college sophomore or the AI programs
small enough to fit on present-day machines.
Let me propose the term "universal psychology" as a substitute,
analogous to the mathematical study of universal algebras. Fewer
connotations, and it better suggests the real thrust of this field -
the study of *possible* intelligent behavior.
stan the r.h. (of lightness)
ssc-vax!sts
(but mail to harpo!utah-cs!shebs)
------------------------------
Date: 26 Sep 1983 0012-PDT
From: Jay <JAY@USC-ECLC>
Subject: re: the halting problem, orders of learning
Certain representations of calculations lead to easy
detection of looping. Consider the function...
f(x) = x
This could lead to ...
f(f(x)) = x
Or to ...
f(f(f(f( ... )))) = x
But why bother! Or for another example, consider the Life blinker: a
horizontal row of three live cells becomes a vertical column of three,
which becomes the horizontal row again, and so on. Why bother
calculating all the generations for this arrangement? The same
information lies in:
for any integer i, Blinker(2i) = the horizontal row, and
Blinker(2i+1) = the vertical column.
There really is no halting problem, or infinite looping. The
information for the blinker need not be fully decoded; it can be just
the above "formulas". So humans could choose a representation of
circular or "infinite looping" ideas, so that the circularity is
expressed in a finite number of bits.
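In Prolog-ish form the whole infinite history fits in two clauses
indexed by the parity of the generation (my encoding; the phase names
are arbitrary):

    blinker(Gen, horizontal) :- 0 is Gen mod 2.
    blinker(Gen, vertical)   :- 1 is Gen mod 2.

    % ?- blinker(1000000, Phase).
    % Phase = horizontal

Nothing ever needs to be "run" to answer a question about generation
one million.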
As for the orders of learning: learning(1) is a behavior. That is,
modifying behavior is a behavior. It can be observed in schools,
concentration camps, or even in the laboratory. So learning(2) is
modifying a certain behavior, and thus nothing more (in one view)
than learning(1). Indeed it is just learning(1) applied to itself!
So learning(i) is just

    (the way an organism modifies)^i  its behavior.

But since behavior is just the way an organism modifies the
environment,

    learning(i) = (the way an organism modifies)^(i+1)  the environment,

and learning(0) is just behavior. So depending on your view, there
are either an infinite number of ways to learn, or there are an
infinite number of organisms (most of whose environments are just other
organisms).
j'
------------------------------
Date: Mon 26 Sep 83 11:48:33-MDT
From: Jed Krohnfeldt <KROHNFELDT@UTAH-20.ARPA>
Subject: Re: learning levels, etc.
Some thoughts about Stan Shebs' questions:
I think that your continuum of 1st order learning, 2nd order learning,
etc. can really be collapsed to just two levels - the basic learning
level, and what has been popularly called the "meta level". Learning
about learning about learning, is really no different than learning
about learning, is it? It is simply a capability to introspect (and
possibly intervene) into basic learning processes.
This also proposes an answer to your second question - why don't
humans go catatonic when presented with circular definitions - the
answer may be that we do have heuristics, or meta-level knowledge,
that prevents us from endlessly looping on circular concepts.
Jed Krohnfeldt
utah-cs!jed
krohnfeldt@utah-20
------------------------------
Date: Mon 26 Sep 83 10:44:34-PDT
From: Bob Moore <BMOORE@SRI-AI.ARPA>
Subject: course announcement
COURSE ANNOUNCEMENT
COMPUTER SCIENCE 400
REPRESENTATION, MEANING, AND INFERENCE
Instructor: Robert Moore
Artificial Intelligence Center
SRI International
Time: MW @ 11:00-12:15 (first meeting Wed. 9/28)
Place: Margaret Jacks Hall, Rm. 301
The problem of the formal representation of knowledge in intelligent
systems is subject to two important constraints. First, a general
knowledge-representation formalism must be sufficiently expressive to
represent a wide variety of information about the world. A long-term
goal here is the ability to represent anything that can be expressed
in natural language. Second, the system must be able to draw
inferences from the knowledge represented. In this course we will
examine the knowledge representation problem from the perspective of
these constraints. We will survey techniques for automatically
drawing inferences from formalizations of commonsense knowledge; we
will look at some of the aspects of the meaning of natural-language
expressions that seem difficult to formalize (e.g., tense and aspect,
collective reference, propositional attitudes); and we will consider
some ways of bridging the gap between formalisms for which the
inference problem is fairly well understood (first-order predicate
logic) and the richer formalisms that have been proposed as meaning
representations for natural language (higher-order logics, intensional
and modal logics).
------------------------------
End of AIList Digest
********************
∂27-Sep-83 1053 GOLUB@SU-SCORE.ARPA today's meeting
Received: from SU-SCORE by SU-AI with TCP/SMTP; 27 Sep 83 10:53:38 PDT
Date: Tue 27 Sep 83 10:53:52-PDT
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: today's meeting
To: faculty@SU-SCORE.ARPA
Please try to be at the faculty meeting promptly at 1:15. We have
a number of matters to discuss. GENE
-------
∂27-Sep-83 1552 @SU-SCORE.ARPA:FY@SU-AI department-wide reception
Received: from SU-SCORE by SU-AI with TCP/SMTP; 27 Sep 83 15:52:10 PDT
Delivery-Notice: While sending this message to SU-AI.ARPA, the
SU-SCORE.ARPA mailer was obliged to send this message in 50-byte
individually Pushed segments because normal TCP stream transmission
timed out. This probably indicates a problem with the receiving TCP
or SMTP server. See your site's software support if you have any questions.
Received: from SU-AI.ARPA by SU-SCORE.ARPA with TCP; Tue 27 Sep 83 15:50:31-PDT
Date: 27 Sep 83 1547 PDT
From: Frank Yellin <FY@SU-AI>
Subject: department-wide reception
To: su-bboards@SU-AI, students@SU-SCORE, faculty@SU-SCORE,
staff@SU-SCORE
There will be a department-wide reception this Thursday at 5pm in
the courtyard behind Margaret Jacks and psychology.
There will be a wide selection of hors d'oeuvres, cheeses, drinks, and other
munchies.
The purpose of the reception is both to welcome new students and to present
the Forsyth Award for excellence in teaching.
All faculty, students, and staff are invited.
-- Frank Yellin
Orientation Committee
∂27-Sep-83 1740 GOLUB@SU-SCORE.ARPA Wirth's visit
Received: from SU-SCORE by SU-AI with TCP/SMTP; 27 Sep 83 17:40:38 PDT
Date: Tue 27 Sep 83 17:40:35-PDT
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Wirth's visit
To: faculty@SU-SCORE.ARPA
Niklaus Wirth will be arriving on Oct 3 and staying with me until
Oct 9. He'll be giving the first colloquium on Tuesday, Oct 4.
I am proposing that we go out for dinner with him the evening of Oct 4.
We could have drinks at my house at 6:15 and dinner about 7:30.
I don't know where the dinner will be. Please let me
know if you would like to come. I would think the meal would cost
between $15 and $20. Wives, girlfriends, etc are free to come too.
Please let me know if you are interested.
GENE
-------
∂28-Sep-83 0755 rita@su-score [Rita Leibovitz <RITA@Score>: Accepted Our Offer Ph.D./MS]
Received: from SU-SHASTA by SU-AI with PUP; 28-Sep-83 07:54 PDT
Received: from Score by Shasta with TCP; Wed Sep 28 07:56:35 1983
Date: Tue 27 Sep 83 16:37:51-PDT
From: Rita Leibovitz <RITA@SU-SCORE.ARPA>
Subject: [Rita Leibovitz <RITA@Score>: Accepted Our Offer Ph.D./MS]
To: admissions@SU-SHASTA.ARPA, yearwood@SU-SCORE.ARPA
Stanford-Phone: (415) 497-4365
Please note that Michael Mills, who previously accepted our offer as a Ph.D.
student, has withdrawn as of 9/26/83.
Suzanne Mueller, HCP from Intel, was added to the CSMS program.
rita
---------------
Received: from Shasta by Score with Pup; Mon 27 Jun 83 10:05:52-PDT
Received: from Score by Shasta with PUP; Mon, 27 Jun 83 10:05 PDT
Date: Mon 27 Jun 83 10:05:26-PDT
From: Rita Leibovitz <RITA@Score>
Subject: Accepted Our Offer Ph.D./MS
To: admissions@Shasta
cc: yearwood@Score
Stanford-Phone: (415) 497-4365
The following two lists are the Ph.D. and CSMS applicants who have accepted
our offer, as of 9/27/83.
9/27/83 PHD APPLICANTS WHO HAVE ACCEPTED OUR OFFER (20)
MALE = 16 FEMALE = 4
LAST FIRST SEX MINORITY INT1 INT2
---- ----- --- -------- ---- ----
ABADI MARTIN M MTC AI
BLATT MIRIAM F VLSI PSL
CARPENTER CLYDE M PSL OS
CASLEY ROSS M MTC PSL
DAVIS HELEN F DCS VLSI
HADDAD RAMSEY M AI UN
HALL KEITH M UN
KELLS KATHLEEN F AI
KENT MARK M NA OR
LAMPING JOHN M AI PSL
LARRABEE TRACY F PSL AI
MC CALL MICHAEL M PSL CL
PALLAS JOSEPH M PSL OS
ROY SHAIBAL M VLSI DCS
SANKAR SRIRAM M PSL OS
SCHAFFER ALEJANDRO M HISPANIC AA CM
SHIEBER STUART M CL AI
SUBRAMANIAN ASHOK M AI NETWORKS
SWAMI ARUN NARASIMHA M PSL MTC
TJIANG WENG KIANG M PSL OS
9/22/83 CSMS APPLICANTS WHO HAVE ACCEPTED OUR OFFER (46)
MALE = 37 FEMALE = 9 DEFERRED = 3
LAST FIRST SEX COTERM DEPT. MINORITY
---- ----- --- ------ ----- --------
ANDERSON ALLAN M
ANDERSON STEVEN M
BENNETT DON M
BERNSTEIN DAVID M
BION JOEL M PHILOSOPHY (DEFER UNTIL 9/84)
BRAWN BARBARA F
CAMPOS ALVARO M
CHAI SUN-KI M ASIAN
CHEHIRE WADIH M
CHEN GORDON M ASIAN
COCHRAN KIMBERLY F
COLE ROBERT M
COTTON TODD M MATH
DICKEY CLEMENT M
ETHERINGTON RICHARD M
GARBAGNATI FRANCESCO M
GENTILE CLAUDIO M
GOLDSTEIN MARK M
HARRIS PETER M
HECKERMAN DAVID M
HUGGINS KATHERINE F
JAI HOKIMI BASSIM M
JONSSON BENGT M
JULIAO JORGE M
LEO YIH-SHEH M
LEWINSON JAMES M MATH
LOEWENSTEIN MAX M
MARKS STUART M E.E. ASIAN (DEFER 4/84)
MUELLER SUZANNE F
MULLER ERIC M
PERKINS ROBERT M CHEMISTRY
PERNICI BARBARA F
PONCELEON DULCE F
PORAT RONALD M
PROUDIAN DEREK M ENGLISH/COG.SCI
REUS EDWARD M
SCOGGINS JOHN M MATH. SCIENCE
SCOTT KIMBERLY F
VELASCO ROBERTO M
VERDONK BRIGITTE F
WENOCUR MICHAEL M
WICKSTROM PAUL M
WU LI-MEI F
WU NORBERT M ELEC. ENGIN. ASIAN (DEFER 9/84)
YOUNG KARL M
YOUNG PAUL M
-------
-------
∂28-Sep-83 1557 @SU-SCORE.ARPA:DEK@SU-AI Lemons
Received: from SU-SCORE by SU-AI with TCP/SMTP; 28 Sep 83 15:57:03 PDT
Received: from SU-AI.ARPA by SU-SCORE.ARPA with TCP; Wed 28 Sep 83 15:57:51-PDT
Date: 28 Sep 83 1555 PDT
From: Don Knuth <DEK@SU-AI>
Subject: Lemons
To: faculty@SU-SCORE
I'm planning to make some lemonade, to serve at the party Jill and I are
giving for faculty and new students next Saturday.
That means I will need 64 lemons.
When Jill and I lived in Southern California we had a lemon tree, and
we were always trying to figure out what to do with all the lemons it
produced. It occurs to me that one of you might have such a tree and
such a problem. If so, I'll be glad to come and pick as many as you want
to get rid of. If not, I'll support America's lemon industry.
(Please don't bother to reply unless you have an overfull lemon tree!)
∂29-Sep-83 1120 LAWS@SRI-AI.ARPA AIList Digest V1 #65
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Sep 83 11:19:49 PDT
Date: Thursday, September 29, 1983 9:46AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #65
To: AIList@SRI-AI
AIList Digest Thursday, 29 Sep 1983 Volume 1 : Issue 65
Today's Topics:
Automatic Translation - French-to-English Request,
Music and AI - Request,
Publications - CSLI Newsletter & Apollo User's Mailing List,
Seminar - Parallel Algorithms: Cook at UTexas Oct. 6,
Lab Reports - UM Expansion,
Software Distributions - Maryland Franz Lisp Code,
Conferences - Intelligent Sys. and Machines, CSCSI,
----------------------------------------------------------------------
Date: Wed 28 Sep 83 11:37:27-PDT
From: David E.T. Foulser <FOULSER@SU-SCORE.ARPA>
Subject: Re: Automatic Translation
I'm looking for a program to perform automatic translation from
French to English. The output doesn't have to be perfect (I hardly
expect it). I'll appreciate any leads you can give me.
Dave Foulser
------------------------------
Date: Wed 28 Sep 83 18:46:09-EDT
From: Ted Markowitz <TJM@COLUMBIA-20.ARPA>
Subject: Music & AI, pointers wanted
I'd like to hear from anyone doing work that somehow relates AI and
music in some fashion. Particularly, are folks using AI programs and
techniques in composition (perhaps as a composer's assistant)? Any
responses will be passed on to those interested in the results.
--ted
------------------------------
Date: Mon 26 Sep 83 12:08:44-CDT
From: Lauri Karttunen <Cgs.Lauri@UTEXAS-20.ARPA>
Subject: CSLI newsletter
[Reprinted from the UTexas-20 bboard.]
A copy of the first newsletter from the Center for the Study of
Language and Information (CSLI) at Stanford is in
PS:<CGS.PUB>CSLI.NEWS. The section on "Remote Affiliates" is of some
interest to many people here.
------------------------------
Date: Thu, 22 Sep 83 14:29:56 EDT
From: Nathaniel Mishkin <Mishkin@YALE.ARPA>
Subject: Apollo Users Mailing List
This message is to announce the creation of a new mailing list:
Apollo@YALE
in which I would like to include all users of Apollo computers who are
interested in sharing their experiences about Apollos. I think all
people could benefit from finding out what other people are doing on
their Apollos.
Mail to the list will be archived in some public place that I will
announce at a later date. At least initially, the list will not be
moderated or digested. If the volume is too great, this may change.
If you are interested in getting on this mailing list, send mail to:
Apollo-Request@YALE
If several people at your site are interested in being members and
your mail system supports local redistribution, please tell me so I
can add a single entry (e.g. "Apollo-Podunk@PODUNK") instead of one
for each person.
------------------------------
Date: Mon 26 Sep 83 16:44:31-CDT
From: CS.GLORIA@UTEXAS-20.ARPA
Subject: Cook Colloquium, Oct 6
[Reprinted from the UTexas-20 bboard.]
Stephen A. Cook, University of Toronto, will present a talk entitled
"Which Problems are Subject to Exponential Speed-up by Parallel Computers?"
on Thursday, Oct. 6 at 3:30 p.m. in Painter Hall 4.42.
Abstract:
In the future we expect large parallel computers to exist with
thousands or millions of processors able to work together on a single
problem. There is already a significant literature of published algorithms
for such machines in which the number of processors available is treated
as a resource (generally polynomial in the input size) and the computation
time is extremely fast (polynomial in the logarithm of the input size).
We shall give many examples of problems for which such algorithms exist
and classify them according to the kind of algorithm which can be used.
On the other hand, we will give examples of problems with feasible sequential
algorithms which appear not to be amenable to such fast parallel algorithms.
------------------------------
Date: 21 Sep 83 16:33:08 EDT (Wed)
From: Mark Weiser <mark%umcp-cs@UDel-Relay>
Subject: UM Expansion
[Due to a complaint that even academic job ads constitute an
"egregious violation" of Arpanet standards, and following failure of
anyone to reply to my subsequent queries, I have decided to publish
general notices of lab expansions but not specific positions. The
following solicitation has been edited accordingly. -- KIL]
The University of Maryland was recently awarded 4.2 million dollars
by the National Science Foundation to develop the hardware and
software for a parallel processing laboratory. More than half of
the award amount is going directly for hardware acquisition, and
this money is also being leveraged through substantial vendor
discounts and joint research programs now being negotiated. We
will be buying things like lots of Vaxes, Sun's, Lisp Machines,
etc., to augment our current 2 780's, ethernet, etc. system.
Several new permanent positions are being created in the Computer
Science Department for this laboratory.
[...]
Anyone interested should make initial inquiries, send resumes, etc.
to Mark Weiser at one of the addresses below:
Mark Weiser
Computer Science Department
University of Maryland
College Park, MD 20742
(301) 454-6790/4251/6291 (in that order).
UUCP: {seismo,allegra,brl-bmd}!umcp-cs!mark
CSNet: mark@umcp-cs
ARPA: mark.umcp-cs@UDel-Relay
------------------------------
Date: 26 Sep 83 17:32:04-PDT (Mon)
From: decvax!mcvax!philabs!seismo!rlgvax!cvl!umcp-cs!liz @ Ucb-Vax
Subject: Maryland software distribution
Article-I.D.: umcp-cs.2755
This is to announce the availability of the Univ of Maryland software
distribution. This includes source code for the following:
1. The flavors package written in Franz Lisp. This package has
been used successfully in a number of large systems at Maryland,
and while it does not implement all the features of Lisp Machine
Flavors, the features present are as close to the Lisp Machine
version as possible within the constraints of Franz Lisp.
(Note that Maryland flavors code *can* be compiled.)
2. Other Maryland Franz hacks including the INTERLISP-like top
level, the lispbreak error handling package, the for macro and
the new loader package.
3. The YAPS production system written in Franz Lisp. This is
similar to OPS5 but more flexible in the kinds of lisp expressions
that may appear as facts and patterns (sublists are allowed
and flavor objects are treated atomically), the variety of
tests that may appear in the left hand sides of rules and the
kinds of actions that may appear in the right hand sides of rules.
In addition, YAPS allows multiple data bases which are flavor
objects and may be sent messages such as "fact" and "goal".
4. The windows package in the form of a C loadable library. This
flexible package allows convenient management of multiple
contexts on the screen and runs on ordinary character display
terminals as well as bit-mapped displays. Included is a Franz
lisp interface to the window library, a window shell for
executing shell processes in windows, and a menu package (also
a C loadable library).
You should be aware of the fact that the lisp software is based on
Franz Opus 38.26 and that we will be switching to the newer version
of lisp that comes with Berkeley 4.2 whenever that comes out.
---------------------------------------------------------------------
To obtain the Univ of Maryland distribution tape:
1. Fill in the form below, make a hard copy of it and sign it.
2. Make out a check to University of Maryland Foundation for $100,
mail it and the form to:
Liz Allen
Univ of Maryland
Dept of Computer Science
College Park MD 20742
3. If you need an invoice, send me mail, and I will get one to you.
Don't forget to include your US Mail address.
Upon receipt of the money, we will mail you a tape containing our
software and the technical reports describing the software. We
will also keep you informed of bug fixes via electronic mail.
---------------------------------------------------------------------
The form to mail to us is:
In exchange for the Maryland software tape, I certify to the
following:
a. I will not use any of the Maryland software distribution in a
commercial product without obtaining permission from Maryland
first.
b. I will keep the Maryland copyright notices in the source code,
and acknowledge the source of the software in any use I make of
it.
c. I will not redistribute this software to anyone without permission
from Maryland first.
d. I will keep Maryland informed of any bug fixes.
e. I am the appropriate person at my site who can make guarantees a-d.
Your signature, name, position,
phone number, U.S. and electronic
mail addresses.
---------------------------------------------------------------------
If you have any questions, etc, send mail to me.
--
-Liz Allen, U of Maryland, College Park MD
Usenet: ...!seismo!umcp-cs!liz
Arpanet: liz%umcp-cs@Udel-Relay
------------------------------
Date: Tue, 27 Sep 83 14:57:00 EDT
From: Morton A. Hirschberg <mort@brl-bmd>
Subject: Conference Announcement
**************** CONFERENCE ****************
"Intelligent Systems and Machines"
Oakland University, Rochester Michigan
April 24-25, 1984
*********************************************
A call-for-papers notice should also appear through SIGART soon.
Conference Chairmen: Dr. Donald Falkenburg (313-377-2218)
Dr. Nan Loh (313-377-2222)
Center for Robotics and Advanced Automation
School of Engineering
Oakland University
Rochester, MI 48063
***************************************************
AUTHORS PLEASE NOTE: A Public Release/Sensitivity Approval is necessary.
Authors from DOD, DOD contractors, and individuals whose work is government
funded must have their papers reviewed for public release and more
importantly sensitivity (i.e. an operations security review for sensitive
unclassified material) by the security office of their sponsoring agency.
In addition, I will try to answer questions for those on the net. Mort
Queries can be sent to mort@brl
------------------------------
Date: Mon 26 Sep 83 11:08:58-PDT
From: Ray Perrault <RPERRAULT@SRI-AI.ARPA>
Subject: CSCSI call for papers
CALL FOR PAPERS
C S C S I - 8 4
Canadian Society for
Computational Studies of Intelligence
University of Western Ontario
London, Ontario
May 18-20, 1984
The Fifth National Conference of the CSCSI will be held at
the University of Western Ontario in London, Canada. Papers are
requested in all areas of AI research, particularly those listed
below. The Program Committee members responsible for these areas
are included.
Knowledge Representation :
Ron Brachman (Fairchild R & D), John Mylopoulos (U of Toronto)
Learning :
Tom Mitchell (Rutgers U), Jaime Carbonell (CMU)
Natural Language :
Bonnie Webber (U of Pennsylvania), Ray Perrault (SRI)
Computer Vision :
Bob Woodham (U of British Columbia), Allen Hanson (U Mass)
Robotics :
Takeo Kanade (CMU), John Hollerbach (MIT)
Expert Systems and Applications :
Harry Pople (U of Pittsburgh), Victor Lesser (U Mass)
Logic Programming :
Randy Goebel (U of Waterloo), Veronica Dahl (Simon Fraser U)
Cognitive Modelling :
Zenon Pylyshyn, Ed Stabler (U of Western Ontario)
Problem Solving and Planning :
Stan Rosenschein (SRI), Drew McDermott (Yale)
Authors are requested to prepare Full papers, of no more
than 4000 words in length, or Short papers of no more than 2000
words in length. A full page of clear diagrams counts as 1000
words. When submitting, authors must supply the word count as
well as the area in which they wish their paper reviewed.
(Combinations of the above areas are acceptable). The Full paper
classification is intended for well-developed ideas, with
significant demonstration of validity, while the Short paper
classification is intended for descriptions of research in
progress. Authors must ensure that their papers describe original
contributions to or novel applications of Artificial Intelligence,
regardless of length classification, and that the research is
properly compared and contrasted with relevant literature.
Three copies of each submitted paper must be in the hands of
the Program Chairman by December 7, 1983. Papers arriving after
that date will be returned unopened, and papers lacking word
count and classifications will also be returned. Papers will be
fully reviewed by appropriate members of the program committee.
Notice of acceptance will be sent on February 28, 1984, and final
camera ready versions are due on March 31, 1984. All accepted
papers will appear in the conference proceedings.
Correspondence should be addressed to either the General
Chairman or the Program Chairman, as appropriate.
General Chairman:
Ted Elcock
Dept. of Computer Science,
Engineering and Mathematical Sciences Bldg.,
University of Western Ontario,
London, Ontario, Canada N6A 5B9
(519)-679-3567
Program Chairman:
John K. Tsotsos
Dept. of Computer Science,
10 King's College Rd.,
University of Toronto,
Toronto, Ontario, Canada M5S 1A4
(416)-978-3619
------------------------------
End of AIList Digest
********************
∂29-Sep-83 1438 LAWS@SRI-AI.ARPA AIList Digest V1 #66
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Sep 83 14:37:21 PDT
Date: Thursday, September 29, 1983 12:50PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #66
To: AIList@SRI-AI
AIList Digest Friday, 30 Sep 1983 Volume 1 : Issue 66
Today's Topics:
Rational Psychology - Definition,
Halting Problem
Natural Language Understanding
----------------------------------------------------------------------
Date: Tue 27 Sep 83 22:39:35-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Rational X
Oh dear! "Rational psychology" is no more about rational people than
"rational mechanics" is about rational rocks or "rational
thermodynamics" about rational hot air. "Rational X" is the
traditional name for the mathematical, axiomatic study of systems
inspired by and intuitively related to the systems studied by the
empirical science "X." Got it?
Fernando Pereira
------------------------------
Date: 27 Sep 83 11:57:24-PDT (Tue)
From: ihnp4!houxm!hogpc!houti!ariel!norm @ Ucb-Vax
Subject: Re: Rational Psychology
Article-I.D.: ariel.463
Actually, the word "rational" in "rational psychology" is merely
redundant. One would hope that psychology would be, as other
sciences, rational. This would in no way detract from its ability to
investigate the causes of human irrationality. No science really
should have to be prefaced with the word "rational", since we should
be able to assume that science is not "irrational". Anyone for
"Rational Chemistry"?
Please note that the scientist's "flash of insight", "intuition",
"creative leap" is heavily dependent upon the rational faculty, the
faculty of CONCEPT-FORMATION. We also rely upon the rational faculty
for verifying and for evaluating such insights and leaps.
--Norm Andrews, AT&T Information Systems, Holmdel, New Jersey
------------------------------
Date: 26 Sep 83 13:01:56-PDT (Mon)
From: ihnp4!drux3!drufl!samir @ Ucb-Vax
Subject: Rational Psychology
Article-I.D.: drufl.670
Norm,
Let me elaborate. Psychology, or logic of mind, involves BOTH
rational and emotional processes. To consider one exclusively defeats
the purpose of understanding.
I have not read the article we are talking about so I cannot
comment on that article, but an example of what I consider a "Rational
Psychology" theory is "Personal Construct Theory" by Kelly. It is an
attractive theory but, in my opinion, it falls far short of describing
"logic of mind" as it fails to integrate emotional aspects.
I consider learning-concept formation-creativity to have BOTH
rational and emotional attributes, hence it would be better if we
studied them as such.
I may be creating a dichotomy where there is none (Rational
vs. Emotional). I want to point you to an interesting book, "Metaphors
We Live By" (I forget the names of the authors), which in addition to
discussing many other AI-related concepts (without mentioning AI)
discusses the question of Objective vs. Subjective, which is similar
to what we are talking about here, Rational vs. Emotional.
Thanks.
Samir Shah
AT&T Information Systems, Denver.
drufl!samir
------------------------------
Date: Tue, 27 Sep 1983 13:30 EDT
From: MINSKY@MIT-OZ
Subject: Re: Halting Problem
About learning: There is a lot about how to get out of loops in my
paper "Jokes and the Cognitive Unconscious". I can send it to whoever
wants, either over this net or by U.S. Snail.
-- minsky
------------------------------
Date: 26 Sep 83 10:31:31-PDT (Mon)
From: ihnp4!clyde!floyd!whuxlb!pyuxll!eisx!pd @ Ucb-Vax
Subject: the Halting problem.
Article-I.D.: eisx.607
There are two AI problems that I know about: the computing power
problem (combinatorial explosions, etc) and the "nature of thought"
problem (knowledge representation, reasoning process etc). This
article concerns the latter.
AI's method (call it "m") seems to be to model human information
processing mechanisms, say legal reasoning methods, and, once a
mechanism is understood clearly and a calculus exists for it, to
program it. This idea can
be transferred to various problem domains, and voila, we have programs
for "thinking" about various little cubbyholes of knowledge.
The next thing to tackle is, how do we model AI's method "m" that was
used to create all these cubbyhole programs? How did whoever thought
of predicate calculus, semantic networks, and blocks-world theories
ad nauseam come up with them? Let's understand that ("m"), formalize
it, and program it. This process (let's call it "m'") gives us a
program that creates cubbyhole programs. Yeah, it runs on a zillion
acres of CMOS, but who cares.
Since a human can do more than just "m", or "m'", we try to make
"m''", "m'''" et al. When does this stop ? Evidently it cannot. The
problem is, the thought process that yields a model or simulation of a
thought process is necessarily distinct from the latter (This is true
of all scientific investigation of any kind of phenomenon, not just
thought processes). This distinction is one of the primary paradigms
of western Science.
Put rather naively, thinking "about" the mind is also done "with" the
mind. This identity of subject and object ensues in the
scientific (dualistic) pursuit of more intelligent machine behavior -
do you folks see it too? Since scientific thought relies on the clear
separation of a theory/model and reality, is a
mathematical/scientific/engineering discipline inadequate for said
pursuit? Is there a system of thought that is self-describing? Is
there a non-dualistic calculus?
What we are talking about here is the ability to separate oneself from
the object/concept/process under study, understand it, model it,
program it... it being anything, including the ability itself. The
ability to recognize that a model is a representation within one's
mind of a reality outside of one's mind. Trying to model this ability
leads one to infinite regress. What is this ability? Let's call it
consciousness. What we seem to be coming up with here is the
INABILITY of math/sci etc. to deal with this phenomenon, to codify it,
and to boldly program a computer that has consciousness. Does this mean
that the statement:
"CONSCIOUSNESS CAN, MUST, AND WILL ONLY COME TO EXISTENCE OF ITS OWN
ACCORD"
is true? "Consciousness" was used for lack of a better word. Replace
it by X, and you still have a significant statement. Consciousness
has already come into existence; and according to the line of reasoning
above, it cannot be brought into existence by the methods available.
If so, how can we "help" machines to achieve consciousness, as
benevolent if rather impotent observers? Should we just
mechanistically build larger and larger neural network simulators
until one says "ouch" when we shut a portion of it off and, better,
tries to deliberately modify(sic) its environment so that that doesn't
happen again? And maybe even can split infinitives?
As a parting shot, it's clear that such neural networks must have
tremendous power to come even close to a fraction of our level of
abstraction ability.
Baffled, but still thinking... References, suggestions, discussions,
pointers avidly sought.
Prem Devanbu
ATTIS Labs , South Plainfield.
------------------------------
Date: 27 Sep 83 05:20:08 EDT (Tue)
From: rlgvax!cal-unix!wise@SEISMO
Subject: Natural Language Analysis and looping
A side light to the discussions of the halting problem is "what then?"
What do we do when a loop is detected? Ignore the information?
Arbitrarily select some level as the *true* meaning?
In some cases, meaning is drawn from outside the language. As an
example, consider a person who tells you, "I don't know a secret".
The person may really know a secret but doesn't want you to know, or
may not know a secret and reason that you'll assume that nobody with a
secret would say something so suspicious ...
A reasonable assumption would be that if the person said nothing,
you'd have no reason to think he knows a secret; so if that was the
assumption he wanted you to make, he would simply have kept
quiet, and you may conclude that the person knows no secret.
This rather simplistic example demonstrates one response to the loop,
i.e., when confronted with circular logic, we disregard it. Another
possibility is that we may use external information to help
disambiguate by selecting a level of the loop. (E.g., this is a
three-year-old, who is sufficiently unsophisticated that he may say
the above when he does, in fact, know a secret.)
This may support the study of cognition as an underpinning for NLP.
Certainly we can never expect a machine to react as we (who is 'we'?)
do unless we know how we react.
------------------------------
Date: 28 Sep 1983 1723-PDT
From: Jay <JAY@USC-ECLC>
Subject: NLP, Learning, and knowledge rep.
As an undergraduate student here at USC, I am required to pass a
Freshman Writing class. I have noticed in this class that one field
of the NL Problem is UNSOLVED even in humans. I am speaking of the
generation of prose.
In AI terms the problems are...
The selection of a small area of the knowledge base which is small
enough to be written about in a few pages, and large enough that a
paper can be generated at all.
One of the solutions to this problem is called 'clustering.' In the
middle of a page one draws a circle about the topic. Then a directed
graph is built by connecting associated ideas to nodes in the graph.
Just free association does not seem to work very well, so it is
suggested that one ask a number of questions about the main idea or
any other node. Some of the questions are What, Where, When, Why (and the
rest of the "Journalistic" q's), can you RELATE an incident about it,
can you name its PARTS, can you describe a process to MAKE or do it.
Finally this smaller data base is reduced to a few interesting areas.
This solution is then a process of Q and A on the data base to
construct a smaller data base.
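As a concrete, entirely hypothetical illustration of that Q-and-A
process (my sketch, in Prolog; none of it is from the message above,
and every name in it is invented): associations produced by free
association or by the journalistic questions are recorded as arcs of
a directed graph, and the smaller data base is taken to be everything
reachable from the chosen central topic.
    % assoc(From, To): To was generated from From by free association
    % or by one of the journalistic questions (what, where, when, ...).
    assoc(essay_topic, what_it_is).
    assoc(essay_topic, related_incident).
    assoc(what_it_is, parts_it_has).
    assoc(related_incident, where_it_happened).
    % cluster(Topic, Idea): Idea lies in the cluster grown around Topic.
    cluster(Idea, Idea).
    cluster(Topic, Idea) :- assoc(Topic, Mid), cluster(Mid, Idea).
    % The reduced data base: all ideas reachable from the chosen topic.
    smaller_data_base(Topic, Ideas) :- setof(I, cluster(Topic, I), Ideas).
A query such as ?- smaller_data_base(essay_topic, Ideas). collects the
connected ideas; the later steps described next (linearizing the
selection and rendering it as prose) are the ones for which no
solution is offered.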
Once a small data base has been selected, it needs to be given a
linear representation. That is, it must be organized into a new data
base that is suitable to prose. There are no solutions offered for
this step.
Finally the data base is coded into English prose. There are no
solutions offered for this step.
This prose is read back in, and compared to the original data base.
Ambiguities need to be removed, some areas elaborated on, and others
rewritten in a clearer style. There are no solutions offered for this
step, but there are some rules - Things to do, and things not to do.
j'
------------------------------
Date: Tuesday, 27 September 1983 15:25:35 EDT
From: Robert.Frederking@CMU-CS-CAD
Subject: Re: NL argument between STLH and Pereira
Several comments in the last message in this exchange seemed worthy of
comment. I think my basic sympathies lie with STLH, although he
overstates his case a bit.
While language is indeed a "fuzzy thing", there are different shades
of correctness, with some sentences being completely right, some with
one obvious *error*, which is noticed by the hearer and corrected,
while others are just a mess, with the hearer guessing the right
answer. This is similar in some ways to error-correcting codes, where
after enough errors, you can't be sure anymore which interpretation is
correct. This doesn't say much about whether the underlying ideal is
best expressed by a grammar. I don't think it is, for NL, but the
reason has more to do with the fact that the categories people use in
language seem to include semantics in a rather pervasive way, so that
making a major distinction between grammatical (language-specific,
arbitrary) and other knowledge (semantics) might not be the best
approach. I could go on at length about this (in fact I'm currently
working on a Tech Report discussing this idea), but I won't, unless
pressed.
As for ignoring human cognition, some AI people do ignore it, but
others (especially here at C-MU) take it very seriously. This seems
to be a major division in the field -- between those who think the
best search path is to go for what the machine seems best suited for,
and those who want to use the human set-up as a guide. It seems to me
that the best solution is to let both groups do their thing --
eventually we'll find out which path (or maybe both) was right.
I read with interest your description of your system -- I am currently
working on a semantic chart parser that sounds fairly similar to your
brief description, except that it is written in OPS5. Thus I was
surprised at the statement that OPS5 has "no capacity for the
parallelism" needed. OPS5 users suffer from the fact that there are
some fairly non-obvious but simple ways to build powerful data
structures in it, and these have not been documented. Fortunately, a
production system primer is currently being written by a group headed
by Elaine Kant. Anyway, I have an as-yet-unaccepted paper describing
my OPS5 parser available, if anyone is interested.
As for scientific "camps" in AI, part of the reason for this seems to
be the fact that AI is a very new science, and often none of the
warring factions have proved their points. The same thing happens in
other sciences, when a new theory comes out, until it is proven or
disproven. In AI, *all* the theories are unproven, and everyone gets
quite excited. We could probably use a little more of the "both
schools of thought are probably partially correct" way of thinking,
but AI is not alone in this. We just don't have a solid base of
proven theory to anchor us (yet).
In regard to the call for a theory which explains all aspects of
language behavior, one could answer "any Turing-equivalent computer".
The real question is, how *specifically* do you get it to work? Any
claim like "my parser can easily be extended to do X" is more or less
moot, unless you've actually done it. My OPS5 parser is embedded in a
Turing-equivalent production system language. I can therefore
guarantee that if any computer can do language learning, so can my
program. The question is, how? The way linguists have often wanted
to answer "how" is to define grammars that are less than
Turing-equivalent which can do the job, which I suspect is futile when
you want to include semantics. In any event, un-implemented
extensions of current programs are probably always much harder than
they appear to be.
(As an aside about sentences as fundamental structures, there is a
two-prong answer: (1) Sentences exist in all human languages. They
appear to be the basic "frame" [I can hear nerves jarring all over the
place] or unit for human communication of packets of information. (2)
Some folks have actually tried to define grammars for dialogue
structures. I'll withhold comment.)
In short, I think warring factions aren't that bad, as long as they
all admit that no one has proven anything yet (which is definitely not
always the case), semantic chart parsing is the way to go for NL,
theories that explain all of cognitive science will be a long time in
coming, and that no one should accept a claim about AI that hasn't
been implemented.
------------------------------
End of AIList Digest
********************
∂29-Sep-83 1610 LAWS@SRI-AI.ARPA AIList Digest V1 #67
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Sep 83 16:09:36 PDT
Date: Thursday, September 29, 1983 12:56PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #67
To: AIList@SRI-AI
AIList Digest Friday, 30 Sep 1983 Volume 1 : Issue 67
Today's Topics:
Alvey Report & Fifth Generation,
AI at Edinburgh - Reply,
Machine Organisms - Desirability,
Humor - Famous Flamer's School
----------------------------------------------------------------------
Date: 23 Sep 83 13:17:41-PDT (Fri)
From: decvax!genrad!security!linus!utzoo!watmath!watdaisy!rggoebel@Ucb-Vax
Subject: Re: Alvey Report and Fifth Generation
Article-I.D.: watdaisy.298
The ``Alvey Report'' is the popular name for the following booklet:
A Programme for Advanced Information Technology
The Report of the Alvey Committee
published by the British Department of Industry, and available from
Her Majesty's Stationery Office. One London address is
49 High Holborn
London WC1V 6HB
The report is indeed interesting because it is a kind of response to
the Japanese Fifth Generation Project, but it is also interesting in
that it is not nearly so much the genesis of a new project as the
organization of existing potential for research and development. The
quickest way to explain the point is that, of the proposed 352 million
pounds that the report suggests be spent, only 42 million is for
AI (actually it's not for AI, but for IKBS - Intelligent Knowledge Based
Systems; seniors will understand the reluctance to use the word AI after
the Lighthill report).
The areas of proposed development include 1) Software engineering,
2) Man/Machine Interfaces, 3) IKBS, and 4) VLSI. I have heard that
the most recent national budget in Britain has not committed the
funds expected for the project, but this is only rumor. I would appreciate
further information (Can you help D.H.D.W.?).
On another related topic, I think it displays a bit of AI chauvinism
to believe that anyone, including the Japanese and the British,
is so naive as to put all their eggs in one basket.
Incidentally, I believe Feigenbaum and McCorduck's book revealed
at least two things: a disguised plea for more funding, and a not so
disguised expose of American engineering chauvinism. Much of the American
reaction to the Japanese project sounds like the old cliches of
male chauvinism, like ``...how could a woman ever do the work of a real man?''
It just may be that American Lispers will end up ``eating quiche.'' 8-)
Randy Goebel
Logic Programming Group
University of Waterloo
UUCP: watmath!rggoebel
------------------------------
Date: Tue 27 Sep 83 22:31:28-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Re: U of Edinburgh, Scotland Inquiry
Since the Lighthill Report, a lot has changed for AI in Britain. The
Alvey Report (British Department of Industry) and the Science and
Engineering Research Council (SERC) initiative on Intelligent
Knowledge-Based Systems (IKBS) have released a lot of money for
Information Technology in general, and AI in particular (It remains to
be seen whether that huge amount of money -- 100s of millions -- is
going to be spent wisely). The Edinburgh Department of AI has managed
to get a substantial slice of that money. They have been actively
looking for people both at lecturer and research associate/fellow
level [a good opportunity for young AIers from the US to get to know
Scotland, her great people and unforgettable Highlands].
The AI Dept. have recently added 3 (4?) new people to their teaching
staff, and have more machines, research staff, and students than ever.
The main areas they work on are: Natural Language (Henry Thompson,
Mark Steedman, Graeme Ritchie), controlled deduction and problem
solving (Alan Bundy and his research assistant and students), Robotics
(Robin Popplestone, Pat Ambler and a number of others), LOGO-style
stuff (Jim Howe [head of department] and Peter Ross) and AI languages
(Robert Rae, Dave Bowen and others). There are probably others I
don't remember. The AI Dept. is both on UUCP and on a network
connected to ARPANET:
<username>%edxa%ucl-cs@isid (ARPANET)
...!vax135!edcaad!edee!edai!<username> (UUCP)
I have partial lists of user names for both connections which I will
mail directly to interested persons.
Fernando Pereira SRI AI Center [an old Edinburgh hand]
pereira@sri-ai (ARPA) ...!ucbvax!pereira@sri-ai (UUCP)
------------------------------
Date: 24 Sep 83 3:54:20-PDT (Sat)
From: hplabs!hp-pcd!orstcs!hakanson @ Ucb-Vax
Subject: Machine Organisms? - (nf)
Article-I.D.: hp-pcd.1920
I was reading a novel recently, and ran across the following passage
relating to "intelligent" machines, robots, etc. In case anyone is interested,
the book is Satan's World, by Poul Anderson, Doubleday 1969 (p. 132).
(I hope this article doesn't seem more appropriate to sf-lovers than to ai.)
... They had electronic speed and precision, yes, but not
full decision-making capacity. ... This is not for lack
of mystic vital forces. Rather, the biological creature
has available to him so much more physical organization.
Besides sensor-computer-effector systems comparable to
those of the machine, he has feed-in from glands, fluids,
chemistry reaching down to the molecular level -- the
integrated ultracomplexity, the entire battery of
*instincts* -- that a billion-odd years of ruthlessly
selective evolution have brought forth. He perceives and
thinks with a wholeness transcending any possible symbolism;
his purposes arise from within, and therefore are infinitely
flexible. The robot can only do what it was designed to
do. Self-programming has [can] extended these limits, to the
point where actual consciousness may occur if desired. But
they remain narrower than the limits of those who made
the machines.
Later in the book, the author describes a view that if a robot "were so
highly developed as to be equivalent to a biological organism, there
would be no point in building it." This is explained as being true
because "nature has already provided us means for making new biological
organisms, a lot cheaper and more fun than producing robots."
I won't go on with the discussion in the book, as it degenerates into the
usual debate about the theoretical, fully motivated computer that is
superior in every way..., and how such a computer would rule the world, etc.
My point in posting the above passage was to ask the experts of netland
to give their opinions of the aforementioned views.
More specifically, how do we feel about the possibilities of building
machines that are "equivalent" to intelligent biological organisms?
Or even non-intelligent ones? Is it possible? And if so, why bother?
It's probably obvious that we don't need to disagree with the views given
by the author in order to want to continue with our studies in Artificial
Intelligence. But how many of us do agree? Disagree?
Marion Hakanson {hp-pcd,teklabs}!orstcs!hakanson (Usenet)
hakanson.oregon-state@rand-relay (CSnet)
hakanson@{oregon-state,orstcs} (also CSnet)
------------------------------
Date: Wed 28 Sep 83 17:18:53-PDT
From: Peter Karp <KARP@SUMEX-AIM>
Subject: Amusement from CMU's opinion bboard
[Reprinted from the CMU opinion board via the SU-SCORE bboard.]
Ever dreamed of flaming with the Big Boys? ... Had that desire to
write an immense diatribe, berating de facto all your peers who hold
contrary opinions? ... Felt the urge to have your fingers moving
without being connected to your brain? Well, by simply sending in the
form on the back of this bboard post, you could begin climbing into
your pulpit alongside greats from all walks of life such as Chomsky,
Weizenbaum, Reagan, Von Daniken, Ellison, Abzug, Arafat and many many
more. You don't even have to leave the comfort of your armchair!
Here's how it works: Each week we send you a new lesson. You read
the notes and then simply write one essay each week on the assigned
topic. Your essays will be read by our expert pool of professional
flamers and graded on Sparsity, Style, Overtness, Incoherence, and a
host of other important aspects. You will receive a long letter from
your specially selected advisor indicating in great detail why you
obviously have the intellectual depth of a soap dish. This
apprenticeship is all there is to it.
Here are some examples of the courses offered by The School:
Classical Flames: You will study the flamers who started it
all. For example, Descartes' much-quoted demonstration that reality
isn't. Special attention is paid, in this course, to the old and new
testaments and how western flaming was influenced by their structure.
(The Bible plays a particularly important role in our program and most
courses will spend at least some time tracing biblical origins or
associations of their special topic. See, particularly, the special
seminar on Space Cadetism, which concentrates on ESP and UFO
phenomena.)
Contemporary Flame Technique: Attention is paid to the detail
of flame form in this course. The student will practice the subtle
and overt ad hominem argument; fact avoidance maneuvers; "at length"
writing style; overgeneralization; and other important factors which
make the modern flame inaccessible to the general populace. Readings
from Russell ("Now I will admit that some unusually stupid children of
ten may find this material a bit difficult to fathom..."), Skinner
(primarily concentrating on his Verbal Learning), Sagan (on abstract
overestimation) and many others. This course is most concerned with
politicians (sometimes, redundantly, referred to as "political
flamers") since their speech writers are particularly adept at the
technique that we wish to foster.
Appearing Brilliant (thanks to the Harvard Lampoon): Nobel
laureates lecture on topics of world import which are very much
outside their field of expertise. There is a large representation of
Nobels in physics: the discoverer of the UnCharmed Pi Mesa Beta Quark
explains how the population explosion can be averted through proper
reculterization of mothers; and professor Nikervator, first person to
properly develop the theory of faster-than-sound "Whizon" docking
choreography, tells us how mind is the sole theological entity.
Special seminar in terminology: The name that you give
something is clearly more important than its semantics. Experts in
nomenclature demonstrate their skills. Pulitzer Prize winner Douglas
Hofstadter makes up 15,000 new words whose definitions, when read
sideways, prove the existence of themselves and constitute fifteen
months of columns in Scientific American. A special round table of
drug company and computer corporation representatives discuss how to
construct catchy names for new products and never give the slightest
hint to the public about what they mean.
Writing the Scientific Journal Flame: Our graduates will be
able to compete in the modern world of academic and industrial
research flaming, where the call is high for trained pontificators.
The student reads short sections from several fields and then may
select a field of concentration for detailed study.
Here is an example description of a detailed scientific flaming
seminar:
Computer Science: This very new field deals directly with the
very metal of the flamer's tools: information and communication. The
student selecting computer science will study several areas including,
but not exclusively:
Artificial Intelligence: Roger Schank explains the design of
his flame understanding and generation engine (RUSHIN) and
will explain how the techniques that it employs constitute a
complete model of mind, brain, intelligence, and quantum
electrodynamics. For contrast, Marvin Minsky does the same.
Weizenbaum tells us, with absolutely no data or alternative
model, why AI is logically impossible, and moreover,
immoral.
Programming Languages: A round table is held between Wirth,
Hoare, Dykstra, Iverson, Perlis, and Jean Samett, in order
to keep them from killing each other.
Machines and systems: Fred Brooks and Gordon Bell lead a
field of experts over the visual cliff of hardware
considerations.
The list of authoritative lectures goes on and on. In addition, an
inspiring introduction by Feigenbaum explains how important it is that
flame superiority be maintained by the United States in the face of
the recent challenges from Namibia and the Panama Canal zone.
But there's more. Not only will you read famous flamers in abundance,
but you will actually have the opportunity to "run with the pack".
The Famous Flamer's School has arranged to provide access for all
computer science track students to the famous ARPANet, where students
will be able to actually participate in discussions of earthshaking
current importance, along with the other brilliant young flamers using
this nationwide resource. You'll read and write about whether
keyboards should have a space bar across the whole bottom or split
under the thumbs; whether or not Emacs is God, and which deity is the
one true editor; whether the brain actually cools the body or not;
whether the earth revolves around the sun or vice versa -- and much
more. Your contributions will be whisked across the nation, faster
than throwing a 2400 foot magtape across the room, into the minds of
thousands of other electrolusers whose brain cells will merge with
yours for the moment that they read your personal opinion of matters
of true science! What importance!
We believe that the program we've constructed is very special and will
provide, for the motivated student, an atmosphere almost completely
content free in which his or her ideas can flow in vacuity. So, take
the moment to indicate your name, address, age, and hat size by
filling out the rear of this post and mailing it to:
FAMOUS FLAMER'S SCHOOL
c/o Locker number 6E
Grand Central Station North
New York, NY.
Act now or forever hold your peace.
------------------------------
End of AIList Digest
********************
∂29-Sep-83 1910 BRODER@SU-SCORE.ARPA First AFLB talk this year
Received: from SU-SCORE by SU-AI with TCP/SMTP; 29 Sep 83 19:10:16 PDT
Date: Thu 29 Sep 83 19:10:31-PDT
From: Andrei Broder <Broder@SU-SCORE.ARPA>
Subject: First AFLB talk this year
To: aflb.all@SU-SCORE.ARPA
cc: sharon@SU-SCORE.ARPA
Stanford-Office: MJH 325, Tel. (415) 497-1787
AFLB is back to business! Usual time, usual place, usual verbose
messages from me.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
F I R S T A F L B T A L K
10/6/83 - Prof. Jeffrey D. Ullman (Stanford):
"A time-communication tradeoff"
We examine how multiple processors could share the computation of a
collection of values whose dependencies are in the form of a grid,
e.g., the estimation of nth derivatives. Two figures of merit are the
time t the shared computation takes and the amount of communication c,
i.e., the number of values that are either inputs or are computed by
one processor and used by another. We prove that no matter how we
share the responsibility for computing an n by n grid, the law ct =
OMEGA(n↑3) must hold.
******** Time and place: Oct. 6, 12:30 pm in MJ352 (Bldg. 460) *******
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Regular AFLB meetings are on Thursdays, at 12:30pm, in MJ352 (Bldg.
460).
If you have a topic you would like to talk about in the AFLB seminar
please tell me. (Electronic mail: broder@su-score.arpa, Office: Jacks
Hall 325, 497-1787) Contributions are wanted and welcome. Not all
time slots for the autumn quarter have been filled so far.
For more information about future AFLB meetings and topics you might
want to look at the file [SCORE]<broder>aflb.bboard .
- Andrei Broder
-------
∂29-Sep-83 2035 @SU-SCORE.ARPA:YM@SU-AI Terminals
Received: from SU-SCORE by SU-AI with TCP/SMTP; 29 Sep 83 20:35:35 PDT
Received: from SU-AI.ARPA by SU-SCORE.ARPA with TCP; Thu 29 Sep 83 20:36:44-PDT
Date: 29 Sep 83 2034 PDT
From: Yoni Malachi <YM@SU-AI>
Subject: Terminals
To: faculty@SU-SCORE
∂29-Sep-83 1701 @SU-SCORE.ARPA:reid@Glacier Terminals
Received: from SU-SCORE by SU-AI with TCP/SMTP; 29 Sep 83 17:01:01 PDT
Received: from Glacier by SU-SCORE.ARPA with TCP; Thu 29 Sep 83 17:01:41-PDT
Date: Thursday, 29 September 1983 17:00:47-PDT
From: Brian Reid <reid@Glacier>
Subject: Terminals
To: su-bboards@Score
I will happily contribute funds enough to buy one public terminal
out of my unrestricted account, and I encourage other faculty to
follow suit.
∂30-Sep-83 0625 reid%SU-SHASTA.ARPA@SU-SCORE.ARPA number of graduating students
Received: from SU-SCORE by SU-AI with TCP/SMTP; 30 Sep 83 06:24:59 PDT
Received: from Shasta by Score with Pup; Fri 30 Sep 83 06:26:09-PDT
Date: Friday, 30 Sep 1983 06:25-PDT
To: RWF at Sail
Cc: Faculty at Score
Subject: number of graduating students
From: Brian Reid <reid@Shasta>
With respect to the comment made by Bob Floyd at Tuesday's meeting,
I found in my mail archives a message from Jock Mackinlay counting past
Ph.D. graduates:
------- Forwarded Message
Mail-from: SU-NET host SAIL rcvd at 9-Mar-83 1052-PST
Date: 09 Mar 83 1046 PST
From: Jock Mackinlay <JDM@SU-AI>
Subject: Graduation statistics
To: reid@SU-SHASTA
Brian,
I went back 11 years and counted the number of PhD graduations. The
average is very close to 15 students a year. So far I have not been able
to get enrollment figures, but assuming 20 new students a year that is a
75% completion rate, which is a credit to the department.
Jock
------- End of Forwarded Message
∂30-Sep-83 1049 CLT SEMINAR IN LOGIC AND FOUNDATIONS
To: "@DIS.DIS[1,CLT]"@SU-AI
Organizational and First Meeting
Time: Wednesday, Oct. 5, 4:15-5:30 PM
Place: Mathematics Dept. Faculty Lounge, 383N Stanford
Speaker: Ian Mason
Title: Undecidability of the metatheory of the propositional calculus.
Before the talk there will be a discussion of plans for the seminar
this fall.
S. Feferman
[PS - about distribution lists -
I have added CSLI-folks@SRI-AI to my logic distribution list,
if you receive this notice twice it is probably because you were
already on the original distribution list. Send me a message and I
will remove the redundancy. If you read this notice on a bboard
and would like to be on the distribution list, send me a message.
If you received this message as electronic mail and would like to
be deleted from the list, also send me a message.
- CLT@SU-AI]
∂30-Sep-83 1646 ELYSE@SU-SCORE.ARPA Niklaus Wirth Visit on Tuesday
Received: from SU-SCORE by SU-AI with TCP/SMTP; 30 Sep 83 16:46:37 PDT
Date: Fri 30 Sep 83 16:47:34-PDT
From: Elyse Krupnick <ELYSE@SU-SCORE.ARPA>
Subject: Niklaus Wirth Visit on Tuesday
To: Faculty@SU-SCORE.ARPA
Stanford-Phone: (415) 497-9746
Klaus Wirth is speaking at the colloquium on Tuesday and some of you might be
interested in talking with him in the morning or afternoon (subject to his
availability). Let me know if you are interested and what time you would
prefer to see him.
-------
∂30-Sep-83 2146 LENAT@SU-SCORE.ARPA Attendance at Colloquium
Received: from SU-SCORE by SU-AI with TCP/SMTP; 30 Sep 83 21:46:10 PDT
Date: Fri 30 Sep 83 21:46:22-PDT
From: Doug Lenat <LENAT@SU-SCORE.ARPA>
Subject: Attendance at Colloquium
To: faculty@SU-SCORE.ARPA
Here is copy of the Bulletin Board announcement of Klaus
Wirth's talk on Tuesday afternoon (10/4/83). The attendance,
especially among faculty, has been very low at colloquiums, and
I hope that Klaus and most of the other speakers I'll get this
quarter will be sufficiently controversial and/or lively and/or
informative in their presentations that you'll begin marking it
down on your calendars as a regular event. See you Tuesday!
CS COLLOQUIUM: Niklaus Wirth will be giving the
opening colloquium of this quarter on Tuesday (Oct. 4),
at 4:15 in Terman Auditorium. His talk is titled
"Reminiscences and Reflections". Although there is
no official abstract, in discussing this talk with him
I learned that Reminiscences refer to his days here at
Stanford one generation ago, and Reflections are on
the current state of both software and hardware, including
his views on what's particularly good and bad in the
current research in each area. I am looking forward to
this talk, and invite all members of our department,
and all interested colleagues, to attend.
Professor Wirth's talk will be preceded by refreshments
served in the 3rd floor lounge (in Margaret Jacks Hall)
at 3:45. Those wishing to schedule an appointment with
Professor Wirth should contact ELYSE@SCORE.
-------
∂01-Oct-83 0822 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #31
Received: from SU-SCORE by SU-AI with TCP/SMTP; 1 Oct 83 08:22:39 PDT
Date: Friday, September 30, 1983 8:53PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #31
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Saturday, 2 Oct 1983 Volume 1 : Issue 31
Today's Topics:
Announcement - COLING 84,
Puzzle - Truthteller Solution
----------------------------------------------------------------------
Date: Thu 29 Sep 83 22:05:48-PDT
From: Don Walker <WALKER@SRI-AI.ARPA>
Subject: COLING 84 Call For Papers
Call For Papers
COLING 84, Tenth International Conference On Computational Linguistics
COLING 84 is scheduled for 2-6 July 1984 at Stanford University,
Stanford, California. It will also constitute the 22nd
Annual Meeting of the Association for Computational Linguistics,
which will host the conference.
Papers for the meeting are solicited on linguistically and
computationally significant topics, including but not limited to
the following:
o Machine translation and machine-aided translation.
o Computational applications in syntax, semantics, anaphora,
and discourse.
o Knowledge representation.
o Speech analysis, synthesis, recognition, and understanding.
o Phonological and morpho-syntactic analysis.
o Algorithms.
o Computational models of linguistic theories.
o Parsing and generation.
o Lexicology and lexicography.
Authors wishing to present a paper should submit five copies of a
summary not more than eight double-spaced pages long, by 9
January 1984 to:
Prof. Yorick Wilks,
Languages and Linguistics,
University of Essex,
Colchester, Essex,
CO4 3SQ, ENGLAND
Phone: 44-(206)862 286;
Telex 98440 ( UNILIB G )
It is important that the summary contain sufficient information,
including references to relevant literature, to convey the
new ideas and allow the program committee to determine the scope
of the work. Authors should clearly indicate to what extent
the work is complete and, if relevant, to what extent it has
been implemented. A summary exceeding eight double-spaced
pages in length may not receive the attention it deserves.
Authors will be notified of the acceptance of their papers by
2 April 1984.
Full length versions of accepted papers should be sent by
14 May 1984 to:
Dr. Donald Walker
COLING 84
SRI International
Menlo Park
California, 94025, USA
Phone: 1-(415)859-3071
ARPAnet: Walker@SRI-AI
Other requests for information should be addressed to:
Dr. Martin Kay
XEROX PARC
3333 Coyote Hill Road
Palo Alto, California 94304, USA
Phone: 1-(415)494-4428
ARPAnet: Kay@PARC
------------------------------
Date: Thu, 29 Sep 83 20:21 EDT
From: Chris Moss <Moss.UPenn@Rand-Relay>
Subject: Solution to Truthteller Puzzle
The solution published in the Digest of 21 Sep from K. Handa
works, but it seems like an Algol program recoded in Prolog.
Here's a solution which uses no asserts or retracts, and it
runs about 100 times faster (extrapolated timings for 10
digits in Unh Prolog, as I only did 8 digits for Handa's
program).
/* Find a number of n digits of which the first is the
number of 0's in the number, the second the number
of 1's etc. */
go(N) :- guess(N, N, 0, 0, [], L), print(L), nl, fail.
go(N) :- statistics.
guess(1, Total, S, SS, L, T.L) :- !, T is Total-S,
count(0, T.L, T).
guess(Val, Total, S, SS, L, List) :- V1 is Val-1,
n(A),
SS2 is V1*A+SS, SS2 =< Total,
S2 is S+A,
guess(V1, Total, S2, SS2, A.L, List),
count(V1, List, A).
count(N, [], 0).
count(N, N.A, B) :- !, B>0, C is B-1, count(N,A,C).
count(N, A.B, C) :- count(N,B,C).
n(0). n(1). n(2). n(3). n(4). n(5). n(6). n(7). n(8). n(9).
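A brief usage note (not part of the original message): the X.Y
notation above is the infix list cons of DEC-10-style Prologs,
written [X|Y] in most later systems, and print/1 and statistics/0
are the usual built-ins of that family. On such a system a ten-digit
run should print something like the following; [6,2,1,0,0,0,1,0,0,0]
is the digit-count list for 6210001000, the only ten-digit
self-describing number, so only one list appears before the timing
statistics.
    ?- go(10).
    [6,2,1,0,0,0,1,0,0,0]
    ... timing statistics ...
    yes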
------------------------------
End of PROLOG Digest
********************
∂01-Oct-83 1801 GOLUB@SU-SCORE.ARPA reception
Received: from SU-SCORE by SU-AI with TCP/SMTP; 1 Oct 83 18:01:48 PDT
Date: Sat 1 Oct 83 18:02:42-PDT
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: reception
To: faculty@SU-SCORE.ARPA
I hope it is understood that the Faculty is free to bring their wives,
friends or guests to the reception at my house. It's helpful if you would
let me know if you can come. GENE
-------
∂01-Oct-83 1804 GOLUB@SU-SCORE.ARPA Meeting
Received: from SU-SCORE by SU-AI with TCP/SMTP; 1 Oct 83 18:04:00 PDT
Date: Sat 1 Oct 83 18:05:14-PDT
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Meeting
To: tenured-faculty@SU-SCORE.ARPA
Please remember our first meeting takes place on Tuesday at 2:30.
We have a number of issues to discuss. GENE
-------
∂01-Oct-83 1808 GOLUB@SU-SCORE.ARPA Dinner for Wirth
Received: from SU-SCORE by SU-AI with TCP/SMTP; 1 Oct 83 18:07:54 PDT
Date: Sat 1 Oct 83 18:08:49-PDT
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Dinner for Wirth
To: Faculty@SU-SCORE.ARPA
The dinner for Wirth will be on Tuesday at MacArthur Park. There will
be drinks at my house at 6:30 and we will meet at the restaurant at
7:30. If you haven't responded to my first message, it is still possible
for you to make a commitment for Tuesday night. GENE
-------
∂03-Oct-83 1104 LAWS@SRI-AI.ARPA AIList Digest V1 #68
Received: from SRI-AI by SU-AI with TCP/SMTP; 3 Oct 83 11:03:40 PDT
Date: Monday, October 3, 1983 9:33AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #68
To: AIList@SRI-AI
AIList Digest Monday, 3 Oct 1983 Volume 1 : Issue 68
Today's Topics:
Humor - Famous Flamer's School Credit,
Technology Transfer & Research Ownership,
AI Reports - IRD & NASA,
TV Coverage - Computer Chronicles,
Seminars - Ullman, Karp, Wirth, Mason,
Conferences - UTexas Symposium & IFIP Workshop
----------------------------------------------------------------------
Date: Mon 3 Oct 83 09:29:16-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Famous Flamer's School -- Credit
The Famous Flamer's School was created by Jeff.Shrager@CMU-CS-A; my
apologies for not crediting him in the original article. If you
saved or distributed a copy, please add a note crediting Jeff.
-- Ken Laws
------------------------------
Date: Thu 29 Sep 83 17:58:29-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: Alas, I must flame...
[ I hate to flame, but here's an issue that really got to me...]
From the call for papers for the "Intelligent Systems and Machines" conference:
AUTHORS PLEASE NOTE: A Public Release/Sensitivity Approval is necessary.
Authors from DOD, DOD contractors, and individuals whose work is government
funded must have their papers reviewed for public release and more
importantly sensitivity (i.e. an operations security review for sensitive
unclassified material) by the security office of their sponsoring agency.
How much AI work does *NOT* fall under one of the categories "Authors from
DOD, DOD contractors, and individuals whose work is government funded" ?
I read this to mean that essentially any government involvement with
research now leaves one open to government "protection".
At issue here is not the goverment duty to safeguard classified materials;
it is the intent of the government to limit distribution of non-military
basic research (alias "sensitive unclassified material"). This "we paid for
it, it's OURS (and the Russians can't have it)" mentality seems the rule now.
But isn't science supposed to be for the benefit of all mankind,
and not just another economic bargaining chip? I cannot help but
be chilled by this divorce of science from a higher moral outlook.
Does it sound old-fashioned to believe that scientific thought is
part of a common heritage, to be used to improve the lives of all? As
far as I can see, if all countries in the world follow the lead of
the US and USSR toward scientific protectionism, we scientists will
have allowed science to abandon its primary role of learning
about ourselves and to become a mere intellectual commodity.
David Rogers
DRogers@SUMEX-AIM.ARPA
------------------------------
Date: Fri 30 Sep 83 10:09:08-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: IRD Report
[Reprinted from IEEE Computer, Sep. 1983, p. 116.]
Rapid Growth Predicted for AI-Based System
Expert systems are now moving out of the research laboratory and into
the commercial marketplace, according to "Artificial Intelligence,"
a 167-page research report from International Resource Development.
Revenue from all AI hardware, software, and services will amount to
only $70 million this year but is expected to reach $8 billion
in the next 10 years.
Biomedical applications promise to be among the fastest growing
uses of AI, reducing the time and cost of diagnosing illnesses and
adding to the accuracy of diagnoses. AI-based systems can range
from "electronic encyclopedias," which physicians can use as
reference sources, to full-fledged "electronic consultants"
capable of taking a patient through an extensive series of diagnostic
tests and determining the patient's ailments with great precision.
"Two immediate results of better diagnostic procedures may be a
reduction in the number of unnecessary surgical procedures performed
on patients and a decrease in the average number of expensive tests
performed on patients," predicts Dave Ledecky of the IRD research
staff. He also notes that the AI technology may leave hospitals
half-empty, since some operations turn out to be unnecessary.
However, he expects no such dramatic result anytime soon, since
widespread medical application of AI technology isn't expected for
about five years.
The IRD report also describes the activities of several new companies
that are applying AI technology to medical systems. Helena Laboratories
in Beaumont, Texas, is shipping a densitometer/analyzer, which
includes a serum protein diagnostic program developed by Rutgers
University using AI technology. Still in the development stage
are the AI-based products of IntelliGenetics in Palo Alto,
California, which are based on work conducted at Stanford University
over the last 15 years.
Some larger, more established companies are also investing in AI
research and development. IBM is reported to have more than five
separate programs underway, while Schlumberger, Ltd., is
spending more than $5 million per year on AI research, much of
which is centered on the use of AI in oil exploration.
AI software may dominate the future computer industry, according to
the report, with an increasing percentage of applications
programming being performed in Lisp or other AI-based "natural"
languages.
Further details on the $1650 report are available from IRD,
30 High Street, Norwalk, CT 06851; (800) 243-5008,
Telex: 64 3452.
------------------------------
Date: Fri 30 Sep 83 10:16:43-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: NASA Report
[Reprinted from IEEE Spectrum, Oct. 1983, p. 78]
Overview Explains AI
A technical memorandum from the National Aeronautics and
Space Administration offers an overview of the core ingredients
of artificial intelligence. The volume is the first in a series
that is intended to cover both artificial intelligence and
robotics for interested engineers and managers.
The initial volume gives definitions and a short history entitled
"The rise, fall, and rebirth of AI" and then lists applications,
principal participants in current AI work, examples of the
state of the art, and future directions. Future volumes in AI
will cover application areas in more depth and will also cover
basic topics such as search-oriented problem-solving and
planning, knowledge representation, and computational logic.
The report is available from the National Technical Information
Service, Springfield, Va. 22161. Please ask for NASA Technical
Memorandum Number 85836.
------------------------------
Date: Thu 29 Sep 83 20:13:09-PDT
From: Ellie Engelmore <EENGELMORE@SUMEX-AIM>
Subject: TV documentary
[Reprinted from the SU-SCORE bboard.]
KCSM-TV Channel 60 is producing a series entitled "The Computer
Chronicles". This is a series of 30-minute programs intended to be a
serious look at the world of computers, a potential college-level
teaching device, and a genuine historical document. The first episode
in the series (with Don Parker discussing computer security) will be
broadcast this evening...Thursday, September 29...9pm.
The second portion of the series, to be broadcast 9 pm Thursday,
October 6, will be on the subject of Artificial Intelligence (with Ed
Feigenbaum).
------------------------------
Date: Thu 29 Sep 83 19:03:27-PDT
From: Andrei Broder <Broder@SU-SCORE.ARPA@SU-Score>
Subject: AFLB
[Reprinted from the SU-SCORE bboard.]
The "Algorithms for Lunch Bunch" (AFLB) is a weekly seminar in
analysis of algorithms held by the Stanford Computer Science
Department, every Thursday, at 12:30 p.m., in Margaret Jacks Hall, rm.
352.
At the first meeting this year, (Thursday, October 6) Prof. Jeffrey D.
Ullman, from Stanford, will talk on "A time-communication tradeoff"
Abstract follows.
Further information about the AFLB schedule is in the file
[SCORE]<broder>aflb.bboard .
If you want to get abstracts of the future talks, please send me a
message to put you on the AFLB mailing list. If you just want to know
the title of the next talk and the name of the speaker look at the
weekly Stanford CSD schedule that is (or should be) sent to every
bboard.
------------------------
10/6/83 - Prof. Jeffrey D. Ullman (Stanford):
"A time-communication tradeoff"
We examine how multiple processors could share the computation of a
collection of values whose dependencies are in the form of a grid,
e.g., the estimation of nth derivatives. Two figures of merit are the
time t the shared computation takes and the amount of communication c,
i.e., the number of values that are either inputs or are computed by
one processor and used by another. We prove that no matter how we
share the responsibility for computing an n by n grid, the law ct =
OMEGA(n↑3) must hold.
******** Time and place: Oct. 6, 12:30 pm in MJ352 (Bldg. 460) *******
------------------------------
Date: Thu 29 Sep 83 09:33:24-CDT
From: CS.GLORIA@UTEXAS-20.ARPA
Subject: Karp Colloquium, Oct. 13, 1983
[Reprinted from the UTexas-20 bboard.]
Richard M. Karp, University of California at Berkeley, will present a talk
entitled, "A Fast Parallel Algorithm for the Maximal Independent Set Problem"
on Thursday, October 13, 1983 at 3:30 p.m. in Painter Hall 4.42. Coffee
at 3 p.m. in PAI 3.24.
Abstract:
One approach to understanding the limits of parallel computation is to
search for problems for which the best parallel algorithm is not much faster
than the best sequential algorithm. We survey what is known about this
phenomenon and show that--contrary to a popular conjecture--the problem of
finding a maximal independent set of vertices in a graph is highly amenable
to speed-up through parallel computation. We close by suggesting some new
candidates for non-parallelizable problems.
------------------------------
Date: Fri 30 Sep 83 21:39:45-PDT
From: Doug Lenat <LENAT@SU-SCORE.ARPA>
Subject: N. Wirth, Colloquium 10/4/83
[Reprinted from the SU-SCORE bboard.]
CS COLLOQUIUM: Niklaus Wirth will be giving the
opening colloquium of this quarter on Tuesday (Oct. 4),
at 4:15 in Terman Auditorium. His talk is titled
"Reminiscences and Reflections". Although there is
no official abstract, in discussing this talk with him
I learned that Reminiscences refer to his days here at
Stanford one generation ago, and Reflections are on
the current state of both software and hardware, including
his views on what's particularly good and bad in the
current research in each area. I am looking forward to
this talk, and invite all members of our department,
and all interested colleagues, to attend.
Professor Wirth's talk will be preceded by refreshments
served in the 3rd floor lounge (in Margaret Jacks Hall)
at 3:45. Those wishing to schedule an appointment with
Professor Wirth should contact ELYSE@SCORE.
------------------------------
Date: 30 Sep 83 1049 PDT
From: Carolyn Talcott <CLT@SU-AI>
Subject: SEMINAR IN LOGIC AND FOUNDATIONS
[Reprinted from the SU-SCORE bboard.]
Organizational and First Meeting
Time: Wednesday, Oct. 5, 4:15-5:30 PM
Place: Mathematics Dept. Faculty Lounge, 383N Stanford
Speaker: Ian Mason
Title: Undecidability of the metatheory of the propositional calculus.
Before the talk there will be a discussion of plans for the seminar
this fall.
S. Feferman
[PS - If you read this notice on a bboard and would like to be on the
distribution list send me a message. - CLT@SU-AI]
------------------------------
Date: Thu 29 Sep 83 14:24:36-CDT
From: Clive Dawson <CC.Clive@UTEXAS-20.ARPA>
Subject: Schedule for C.S. Dept. Centennial Symposium
[Reprinted from the UTexas-20 bboard.]
COMPUTING AND THE INFORMATION AGE
October 20 & 21, 1983
Joe C. Thompson Conference Center
Thursday, Oct. 20
-----------------
8:30 Welcoming address - A. G. Dale (UT Austin)
G. J. Fonken, VP for Acad. Affairs and Research
9:00 Justin Rattner (Intel)
"Directions in VLSI Architecture and Technology"
10:00 J. C. Browne (UT Austin)
10:15 Coffee Break
10:45 Mischa Schwartz (Columbia)
"Computer Communications Networks: Past, Present and Future"
11:45 Simon S. Lam (UT Austin)
12:00 Lunch
2:00 Herb Schwetman (Purdue)
"Computer Performance: Evaluation, Improvement, and Prediction"
3:00 K. Mani Chandy (UT Austin)
3:15 Coffee Break
3:45 William Wulf (Tartan Labs)
"The Evolution of Programming Languages"
4:45 Don Good (UT Austin)
Friday, October 21
------------------
8:30 Raj Reddy (CMU)
"Supercomputers for AI"
9:30 Woody Bledsoe (UT Austin)
9:45 Coffee Break
10:15 John McCarthy (Stanford)
"Some Expert Systems Require Common Sense"
11:15 Robert S. Boyer and J Strother Moore (UT Austin)
11:30 Lunch
1:30 Jeff Ullman (Stanford)
"A Brief History of Achievements in Theoretical Computer Science"
2:30 James Bitner (UT Austin)
2:45 Coffee Break
3:15 Cleve Moler (U. of New Mexico)
"Mathematical Software -- The First of the Computer Sciences"
4:15 Alan Cline (UT Austin)
4:30 Summary - K. Mani Chandy, Chairman, Dept. of Computer Sciences
------------------------------
Date: Sunday, 2 October 1983 17:49:13 EDT
From: Mario.Barbacci@CMU-CS-SPICE
Subject: Call For Participation -- IFIP Workshop
CALL FOR PARTICIPATION
IFIP Workshop on Hardware Supported Implementation of
Concurrent Languages in Distributed Systems
March 26-28, 1984, Bristol, U.K.
TOPICS:
- the impact of distributed computing languages and compilers on the
architecture of distributed systems.
- operating systems; centralized/decentralized control, process
communications and synchronization, security
- hardware design and interconnections
- hardware/software interrelation and trade offs
- modelling, measurements, and performance
Participation is by INVITATION ONLY. If you are interested in attending this
workshop, write to the workshop chairman and include an abstract (1000 words
approx.) of your proposed contribution.
Deadline for Abstracts: November 15, 1983
Workshop Chairman: Professor G.L. Reijns
Chairman, IFIP Working Group 10.3
Delft University of Technology
P.O. Box 5031
2600 GA Delft
The Netherlands
------------------------------
End of AIList Digest
********************
∂03-Oct-83 1255 LAWS@SRI-AI.ARPA AIList Digest V1 #69
Received: from SRI-AI by SU-AI with TCP/SMTP; 3 Oct 83 12:51:38 PDT
Date: Monday, October 3, 1983 9:50AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #69
To: AIList@SRI-AI
AIList Digest Monday, 3 Oct 1983 Volume 1 : Issue 69
Today's Topics:
Rational Psychology - Examples,
Organization - Reflexive Reasoning & Consciousness & Learning & Parallelism
----------------------------------------------------------------------
Date: Thu, 29 Sep 83 18:29:39 EDT
From: "John B. Black" <Black@YALE.ARPA>
Subject: "Rational Psychology"
Recently on this list, Pereira held up as a model for us all, Doyle's
"Rational Psychology" article in AI Magazine. Actually, I think what Pereira
is really requesting is a reduction of overblown claims and assertions with no
justification (e.g., "solutions" to the natural language problem). However,
since he raised the "rational psychology" issue I thought I would comment on it.
I too read Doyle's article with interest (although it seemed essentially
the same as Don Norman's numerous calls for a theoretical psychology in the
early 1970s), but (like the editor of this list) I was wondering what the
referents were of the vague descriptions of "rational psychology." However,
Doyle does give some examples of what he means: mathematical logic and
decision theory, mathematical linguistics, and mathematical theories of
perception. Unfortunately, this list is rather disappointing because --
with the exception of the mathematical theories of perception -- they have
all proved to be misleading when actually applied to people's behavior.
Having a theoretical (or "rational" -- terrible name with all the wrong
connotations) psychology is certainly desirable, but it does have to make some
contact with the field it is a theory of. One of the problems here is that
the "calculus" of psychology has yet to be invented, so we don't have the tools
we need for the "Newtonian mechanics" of psychology. The latest mathematical
candidate was catastrophe theory, but it turned out to be a catastrophe when
applied to human behavior. Perhaps Pereira and Doyle have a "calculus"
to offer.
Lacking such an appropriate mathematics, however, does not stop a
theoretical psychology from existing. In fact, I offer three recent examples
of what a theoretical psychology ought to be doing at this time:
Tversky, A. Features of similarity. PSYCHOLOGICAL REVIEW, 1977, 327-352.
Schank, R.C. DYNAMIC MEMORY. Cambridge University Press, 1982.
Anderson, J.R. THE ARCHITECTURE OF COGNITION. Harvard University Press, 1983.
------------------------------
Date: Thu 29 Sep 83 19:03:40-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Self-description, multiple levels, etc.
For a brilliant if tentative attack on the questions noted by
Prem Devanbu, see Brian Smith's thesis "Reflection and Semantics
in a Procedural Language", MIT/LCS/TR-272.
Fernando Pereira
------------------------------
Date: 27 Sep 83 22:25:33-PDT (Tue)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: reflexive reasoning ? - (nf)
Article-I.D.: uiucdcs.3004
I believe the pursuit of "consciousness" to be complicated by the difficulty
of defining what we mean by it (to state the obvious). I prefer to think in
less "spiritual" terms, say starting with the ability of the human memory to
retain impressions for varying periods of time. For example, students cramming
for an exam can remember long lists of things for a couple of hours -- just
long enough -- and forget them by the end of the same day. Some thoughts are
almost instantaneously lost, others last a lifetime.
Here's my suggestion: let's start thinking in terms of self-observation, i.e.
the construction of models to explain the traces that are left behind by things
we have already thought (and felt?). These models will be models of what goes
on in the thought processes, can be incorrect and incomplete (like any other
model), and even reflexive (the thoughts dedicated to this analysis leave
their own traces, and are therefore subject to modelling, creating notions
of self-awareness).
To give a concrete (if standard) example: it's quite reasonable for someone
to say to us, "I didn't know that." Or again, "Oh, I just said it, what was
his name again ... How can I be so forgetful!"
This leads us into an interesting "problem": the fading of human memory with
time. I would not be surprised if this were actually desirable, and had to be
emulated by computer. After all, if you're going to retain all those traces
of where a thought process has gone, traces of the analysis of those traces,
etc., then memory would fill up very quickly.
I have been thinking in this direction for some time now, and am working on
a programming language which operates on several of the principles stated
above. At present the language is capable of responding dynamically to any
changes in problem state produced by other parts of the program, and rules
can even respond to changes induced by themselves. Well, that's the start;
the process of model construction seems to me to be by far the harder part
of the task.
It becomes especially interesting when you think about modelling what look
like "levels" of self-awareness, but could actually be manifestations of just
one mechanism: traces of some work, which are analyzed, thus leaving traces
of self-analysis; which are analyzed ... How are we to decide that the traces
being analyzed are somehow different from the traces of the analysis? Even
"self-awareness" (as opposed to full-blown "consciousness") will be difficult
to understand. However, at this point I am convinced that we are not dealing
with a potential for infinite regress, but with a fairly simple mechanism
whose results are hard to interpret. If I am right, we may have some thinking
to do about subject-object distinctions.
In case you're interested in my programming language, look for some papers due
to appear shortly:
Logic-Programming Production Systems with METALOG. Software Practice
and Experience, to appear shortly.
METALOG: a Language for Knowledge Representation and Manipulation.
Conf on AI (April '83).
Of course, I don't say that I'm thinking about "self-awareness" as a long-term
goal (my co-author isn't) ! If/when such a goal becomes acceptable to the AI
community it will probably be called something else. Doesn't "reflexive
reasoning" sound more scientific?.
Marcel Schoppers,
Dept of Comp Sci,
U of Illinois @ Urbana-Champaign
uiucdcs!marcel
------------------------------
Date: 27 Sep 83 19:24:19-PDT (Tue)
From: decvax!genrad!security!linus!philabs!cmcl2!floyd!vax135!ariel!ho
u5f!hou5e!hou5d!mat@Ucb-Vax
Subject: Re: the Halting problem.
Article-I.D.: hou5d.674
I may be naive, but it seems to me that any attempt to produce a system that
will exhibit consciousness-like behaviour will require emotions and the
underlying base that they need and supply. Reasoning did not evolve
independently of emotions; human reason does not, in my opinion, exist
independently of them.
Any comments? I don't recall seeing this topic discussed. Has it been? If
not, is it about time to kick it around?
Mark Terribile
hou5d!mat
------------------------------
Date: 28 Sep 83 12:44:39-PDT (Wed)
From: ihnp4!drux3!drufl!samir @ Ucb-Vax
Subject: Re: the Halting problem.
Article-I.D.: drufl.674
I agree with Mark. An interesting book to read regarding consciousness is
"The Origin of Consciousness in the Breakdown of the Bicameral Mind" by
Julian Jaynes. Although I may not agree fully with his thesis, it did
get me thinking and questioning the usual ideas regarding
consciousness.
An analogy regarding consciousness: "emotions are like the roots of a
plant, while consciousness is the fruit".
Samir Shah
AT&T Information Systems, Denver.
drufl!samir
------------------------------
Date: 30 Sep 83 13:42:32 EDT
From: BIESEL@RUTGERS.ARPA
Subject: Recursion of representations.
Some of the more recent messages have questioned the possibility of
producing programs which can "understand" and "create" human discourse,
because this kind of "understanding" seems to be based upon an infinite
kind of recursion. Stated very simply, the question is "how can the human
mind understand itself, given that it is finite in capacity?", which
implies that humans cannot create a machine equivalent of a human mind,
since (one assumes) understanding is required before construction
becomes possible.
There are two rather simple objections to this notion:
1) Humans create minds every day, without understanding
anything about it. Just some automatic biochemical
machinery, some time, and exposure to other minds
does the trick for human infants.
2) John von Neumann, and more recently E.F. Codd,
demonstrated in a very general way the existence
of universal constructors in cellular automata.
These are configurations in cellular space which are
able to construct any configuration, including
copies of themselves, in finite time (for finite
configurations).
No infinite recursion is involved in either case, nor is "full"
understanding required.
I suspect that at some point in the game we will have learned enough about
what works (in a primarily empirical sense) to produce machine intelligence.
In the process we will no doubt learn a lot about mind in general, and our
own minds in particular, but we will still not have a complete understanding
of either.
People will continue to produce AI programs; they will gradually get better
at various tasks; others will combine various approaches and/or programs to
create systems that play chess and can talk about the geography of South
America; occasionally someone will come up with an insight and a better way
to solve a sub-problem ("subjunctive reference shift in frame-demon
instantiation shown to be optimal for linearization of semantic analysis
of noun phrases" IJCAI 1993); lay persons will come to take machine intelligence
for granted; AI people will keep searching for a better definition of
intelligence; nobody will really believe that machines have that indefinable
something (call it soul, or whatever) that is essential for a "real" mind.
Pete Biesel@Rutgers.arpa
------------------------------
Date: 29 Sep 83 14:14:29 EDT
From: SOO@RUTGERS.ARPA
Subject: Top-Down? Bottom-Up?
[Reprinted from the Rutgers bboard.]
I happened to read a paper by Michael A. Arbib about brain theory.
Its first section, "Brain Theory: 'Bottom-up' and
'Top-Down'", I think will shed some light on our issue of
top-down and bottom-up approaches in the machine learning seminar.
I would like to quote several remarks from the brain theorist's
viewpoint to share with those interested:
"I want to suggest that brain theory should confront the 'bottom-up'
analyses of neural modelling not only with biological control theory but
also with the 'top-down' analyses of artificial intelligence and cognitive
psychology. In bottom-up analyses, we take components of known function, and
explore ways of putting them together to synthesize more and more complex
systems. In top-down analyses, we start from some complex functional behavior
that interests us, and try to determine what are natural subsystems into which
we can decompose a system that performs in the specified way. I would argue
that progress in brain theory will depend on the cyclic interaction of these
two methodologies. ..."
" The top-down approach complement bottom-up studies, for one cannot simply
wait until one knows all the neurons are and how they are connected to then
simulate the complete system. ..."
I believe that a similar philosophy applies to the study of machine learning
too.
For those interested, the paper can be found in COINS technical report 81-31
by M. A. Arbib, "A View of Brain Theory".
Von-Wun,
------------------------------
Date: Fri, 30 Sep 83 14:45:55 PDT
From: Rik Verstraete <rik@UCLA-CS>
Subject: Parallelism and Physiology
I would like to comment on your message that was printed in AIList Digest
V1#63, and I hope you don't mind if I send a copy to the discussion list
"self-organization" as well.
    Date: 23 Sep 1983 0043-PDT
    From: FC01@USC-ECL
    Subject: Parallelism
    I thought I might point out that virtually no machine built in the
    last 20 years is actually lacking in parallelism. In reality, just as
    the brain has many neurons firing at any given time, computers have
    many transistors switching at any given time. Just as the cerebellum
    is able to maintain balance without the higher brain functions in the
    cerebrum explicitly controlling the IO, most current computers have IO
    controllers capable of handling IO while the CPU does other things.
The issue here is granularity, as discussed in general terms by E. Harth
("On the Spontaneous Emergence of Neuronal Schemata," pp. 286-294 in
"Competition and Cooperation in Neural Nets," S. Amari and M.A. Arbib
(eds), Springer-Verlag, 1982, Lecture Notes in Biomathematics # 45). I
certainly recommend his paper. I quote:
One distinguishing characteristic of the nervous system is
thus the virtually continuous range of scales of tightly
intermeshed mechanisms reaching from the macroscopic to the
molecular level and beyond. There are no meaningless gaps
of just matter.
I think Harth has a point, and applying his ideas to the issue of parallel
versus sequential clarifies some aspects.
The human brain seems to be parallel at ALL levels. Not only is a large
number of neurons firing at the same time, but also groups of neurons,
groups of groups of neurons, etc. are active in parallel at any time. The
whole neural network is a totally parallel structure, at all levels.
You pointed out (correctly) that in modern electronic computers a large
number of gates are "working" in parallel on a tiny piece of the problem,
and that also I/O and CPU run in parallel (some systems even have more than
one CPU). However, the CPU itself is a finite state machine, meaning it
operates as a time-sequence of small steps. This level is inherently
sequential. It therefore looks like there's a discontinuity between the
gate level and the CPU/IO level.
I would even extend this idea to machine learning, although I'm largely
speculating now. I have the impression that brains not only WORK in
parallel at all levels of granularity, but also LEARN in that way. Some
computers have implemented a form of learning, but it is almost exclusively
at a very high level (most current AI work on learning is at this level),
or only at a very low level (cf. Perceptron). A spectrum of adaptation is
needed.
Maybe the distinction between the words learning and self-organization is
only a matter of granularity too. (??)
    Just as people have faster short term memory than long term memory but
    less of it, computers have faster short term memory than long term
    memory and use less of it. These are all results of cost/benefit
    tradeoffs for each implementation, just as I presume our brains and
    bodies are.
I'm sure most people will agree that brains do not have separate memory
neurons and processing neurons or modules (or even groups of neurons).
Memory and processing is completely integrated in a human brain.
Certainly, there are not physically two types of memories, LTM and STM.
The concept of LTM/STM is only a paradigm (no doubt a very useful one), but
when it comes to implementing the concept, there is a large discrepancy
between brains and machines.
    Don't be so fast to think that real computer designers are
    ignorant of physiology.
Indeed, a lot of people I know in Computer Science do have some idea of
physiology. (I am a CS major with some background in neurophysiology.)
Furthermore, much of the early CS emerged from neurophysiology, and was an
explicit attempt to build artificial brains (at a hardware/gate level).
However, although "real computer designers" may not be ignorant of
physiology, it doesn't mean that they actually manage to implement all the
concepts they know. We still have a long way to go before we have
artificial brains...
    The trend towards parallelism now is more like
    the human social system of having a company work on a problem. Many
    brains, each talking to each other when they have questions or
    results, each working on different aspects of a problem. Some people
    have breakdowns, but the organization keeps going. Eventually it comes
    up with a product, although it may not really solve the problem posed
    at the beginning, it may have solved a related problem or found a
    better problem to solve.
Again, working in parallel at this level doesn't mean everything is
parallel.
    Another copyrighted excerpt from my not yet finished book on
    computer engineering modified for the network bboards, I am ever
    yours,
    Fred
All comments welcome.
Rik Verstraete <rik@UCLA-CS>
PS: It may sound like I am convinced that parallelism is the only way to
go. Parallelism is indeed very important, but still, I believe sequential
processing plays an important role too, even in brains. But that's a
different issue...
------------------------------
End of AIList Digest
********************
∂03-Oct-83 1550 GOLUB@SU-SCORE.ARPA meeting
Received: from SU-SCORE by SU-AI with TCP/SMTP; 3 Oct 83 15:49:46 PDT
Date: Mon 3 Oct 83 15:49:02-PDT
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: meeting
To: tenured-faculty@SU-SCORE.ARPA
Our tenured faculty meeting will take place on Tuesday at 2:30 in
Room 252. There are several important issues to discuss so please be
on time. Gene
-------
∂03-Oct-83 1558 GOLUB@SU-SCORE.ARPA lunch
Received: from SU-SCORE by SU-AI with TCP/SMTP; 3 Oct 83 15:58:02 PDT
Date: Mon 3 Oct 83 15:54:54-PDT
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: lunch
To: faculty@SU-SCORE.ARPA
The first lunch of the term will take place on Tuesday at 12:15.
Niklaus Wirth will be visiting. GENE
-------
∂03-Oct-83 1636 larson@Shasta Implications of accepting DOD funding
Received: from SU-SHASTA by SU-AI with PUP; 03-Oct-83 16:36 PDT
Date: Mon, 3 Oct 83 16:37 PDT
From: John Larson <larson@Shasta>
Subject: Implications of accepting DOD funding
To: funding@sail
From AIList Digest V1 #68
------------------------------
Date: Thu 29 Sep 83 17:58:29-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: Alas, I must flame...
[ I hate to flame, but here's an issue that really got to me...]
From the call for papers for the "Artificial Intelligence and Machines" conference:
AUTHORS PLEASE NOTE: A Public Release/Sensitivity Approval is necessary.
Authors from DOD, DOD contractors, and individuals whose work is government
funded must have their papers reviewed for public release and more
importantly sensitivity (i.e. an operations security review for sensitive
unclassified material) by the security office of their sponsoring agency.
How much AI work does *NOT* fall under one of the categories "Authors from
DOD, DOD contractors, and individuals whose work is government funded" ?
I read this to mean that essentially any government involvement with
research now leaves one open to government "protection".
At issue here is not the government's duty to safeguard classified materials;
it is the intent of the government to limit distribution of non-military
basic research (alias "sensitive unclassified material"). This "we paid for
it, it's OURS (and the Russians can't have it)" mentality seems the rule now.
But isn't science supposed to be for the benefit of all mankind,
and not just another economic bargaining chip? I cannot help but
be chilled by this divorce of science from a higher moral outlook.
Does it sound old fashioned to believe that scientific thought is
part of a common heritage, to be used to improve the lives of all? As
far as I can see, if all countries in the world follow the lead of
the US and USSR toward scientific protectionism, we scientists will
have allowed science to abandon its primary role toward learning
about ourselves and become a mere intellectual commodity.
David Rogers
DRogers@SUMEX-AIM.ARPA
------------------------------
∂03-Oct-83 1907 LAWS@SRI-AI.ARPA AIList Digest V1 #70
Received: from SRI-AI by SU-AI with TCP/SMTP; 3 Oct 83 19:06:31 PDT
Date: Monday, October 3, 1983 5:38PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #70
To: AIList@SRI-AI
AIList Digest Tuesday, 4 Oct 1983 Volume 1 : Issue 70
Today's Topics:
Technology Transfer & Research Ownership - Clarification,
AI at Edinburgh - Description
----------------------------------------------------------------------
Date: Mon 3 Oct 83 11:55:41-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: recent flame
I would like to clarify my recent comments on the disclaimer published
with the conference announcement for the "Intelligent Systems and Machines"
conference to be given at Oakland University. I did not mean to suggest
that the organizers of this particular conference are the targets of my
criticism; indeed, I congratulate them for informing potential attendees
of their obligations under the law. I sincerely apologize for not making
this obvious in my original note.
I also realize that most conferences will have to deal with this issue
in the future, and meant my message not as a "call to action", but rather,
as a "call to discussion" of the proper role of goverment in AI and science
in general. I believe that we should follow these rules, but should
also participate in informed discussion of their long-range effect and
direction.
Apologies and regards,
David Rogers
DRogers@SUMEX-AIM.ARPA
------------------------------
Date: Friday, 30-Sep-83 14:17:58-BST
From: BUNDY HPS (on ERCC DEC-10) <bundy@edxa>
Reply-to: bundy@rutgers.arpa
Subject: Does Edinburgh AI exist?
A while back someone in your digest asked whether the AI
dept at Edinburgh still exists. The short answer is yes, it flourishes.
The long answer is contained in the departmental description that follows.
Alan Bundy
------------------------------
Date: Friday, 30-Sep-83 14:20:00-BST
From: BUNDY HPS (on ERCC DEC-10) <bundy@edxa>
Reply-to: bundy@rutgers.arpa
Subject: Edinburgh AI Dept - A Description
THE DEPARTMENT OF ARTIFICIAL INTELLIGENCE AT EDINBURGH UNIVERSITY
Artificial Intelligence was recognised as a separate discipline by Edinburgh
University in 1966. The Department in its present form was created in 1974.
During its existence it has steadily built up a programme of undergraduate and
post-graduate teaching and engaged in a vigorous research programme. As the
only Department of Artificial Intelligence in any university, and as an
organisation which has made a major contribution to the development of the
subject, it is poised to play a unique role in the advance of Information
Technology which is seen to be a national necessity.
The Department collaborates closely with other departments within the
University in two distinct groupings. Departments concerned with Cognitive
Science, namely A.I., Linguistics, Philosophy and Psychology all participate
in the School of Epistemics, which dates from the early 70's. A new
development is an active involvement with Computer Science and Electrical
Engineering. The 3 departments form the basis of the School of Information
Technology. A joint MSc in Information Technology began in 1983.
A.I. are involved in collaborative activities with other institutions
which are significant in that they involve the transfer of people,
ideas and software. In particular this involves MIT (robotics),
Stanford (natural language), Carnegie-Mellon (the PERQ machine) and
Grenoble (robotics).
Relationships with industry are progressing. As well as a number of
development contracts, A.I. have recently had a teaching post funded by the
software house Systems Designers Ltd. There is, however, a natural limit to
the extent to which a University Department can provide a service to industry:
consequently a proposal to create an Artificial Intelligence Applications
Institute has been put forward and is at an advanced stage of planning. This
will operate as a revenue earning laboratory, performing a technology transfer
function on the model of organisations like the Stanford Research Institute or
Bolt Beranek and Newman.
Research in A.I.
A.I. is a new subject, so there is a very close relationship between
teaching at all levels and research. Artificial Intelligence is about making
machines behave in ways which exhibit some of the characteristics of
intelligence, and about how to integrate such capabilities into larger
coherent systems. The vehicle for such studies has been the digital computer,
chosen for its flexibility.
A.I. Languages and Systems.
The development of high level programming languages has been crucial to all
aspects of computing because of the consequent easing of the task of
communicating with these machines. Artificial Intelligence has given birth to
a distinctive series of languages which satisfy different design constraints
to those developed by Computer Scientists whose primary concern has been to
develop languages in which to write reliable and efficient programming systems
to perform standard computing tasks. Languages developed in the Artificial
Intelligence field have been intended to allow people readily to try out ideas
about how a particular cognitive process can be mechanised. Consequently they
have provided symbolic computation as well as numeric, and have allowed
program code and data to be equally manipulable. They are also highly
interactive, and often integrated with a sophisticated text editor, so that
the iteration time for trying out a new idea can be rapid.
Edinburgh has made a substantial contribution to A.I. programming languages
(with significant cross fertilisation to the Computer Science world) and will
continue to do so. POP-2 was designed and developed in the A.I. Department
by Popplestone and Burstall. The development of Prolog has been more complex.
Kowalski first formulated the crucial idea of predicate logic as a programming
language during his period in the A.I. Department. Prolog itself was designed
and first implemented in Marseille, as a result of Kowalski's interaction with
a research group there. This was followed by a re-implementation at
Edinburgh, which demonstrated its potential as a practical tool.
To date the A.I. Department have supplied implementations of A.I. languages
to over 200 laboratories around the world, and are involved in an active
programme of Prolog systems development.
The current development in languages is being undertaken by a group supported
by the SERC, led by Robert Rae, and supervised by Dr Howe. The concern of the
group is to provide language support for A.I. research nationwide, and to
develop A.I. software for a single user machine, the ICL PERQ. The major goal
of this project is to provide the superior symbolic programming capability of
Prolog, in a user environment of the quality to be found in modern personal
computers with improved interactive capabilities.
Mathematical Reasoning.
If Artificial Intelligence is about mechanising reasoning, it has a close
relationship with logic which is about formalising mathematical reasoning, and
with the work of those philosophers who are concerned with formalising
every-day reasoning. The development of Mathematical Logic during the 20th
century has provided a part of the theoretical basis for A.I. Logic provides a
rigorous specification of what may in principle be deduced - it says little
about what may usefully be deduced. And while it may superficially appear
straightforward to render ordinary language into logic, on closer examination
it can be seen to be anything but easy.
Nevertheless, logic has played a central role in the development of A.I. in
Edinburgh and elsewhere. An early attempt to provide some control over the
direction of deduction was the resolution principle, which introduced a sort
of matching procedure called unification between parts of the axioms and parts
of a theorem to be proved. While this principle was inadequate as a means of
guiding a machine in the proof of significant theorems, it survives in Prolog
whose equivalent of procedure call is a restricted form of resolution.
A.I. practitioners still regard the automation of mathematical reasoning to
be a crucial area in A.I., but have moved from earlier attempts to find uniform
procedures for an efficient search of the space of possible deductions to the
creation of systems which embody expert knowledge about specific domains. For
example if such a system is trying to solve a (non linear) equation, it may
adopt a strategy of using the axioms of algebra to bring two instances of the
unknown closer together with the "intention" of getting them to coalesce.
Work in mathematical reasoning is under the direction of Dr Bundy.
Robotics.
The Department has always had a lively interest in robotics, in particular in
the use of robots for assembly. This includes the use of vision and force
sensing, and the design of languages for programming assembly robots. Because
of the potential usefulness of fast moving robots, the Department has
undertaken a study of their dynamic behaviour, design and control. The work
of the robot group is directed by Mr Popplestone.
A robot command language RAPT is under development: this is intended to make
it easy for non-computer experts to program an assembly robot. The idea is
that the assembly task should be programmed in terms of the job that is to be
done and how the objects are to be fitted together, rather than in terms of
how the manipulator should be moved. This SERC funded work is steered by a
Robot Language Working Party which consists of industrialists and academics;
the recently formed Tripartite Study Group on Robot Languages extends the
interest to France and Germany.
An intelligent robot needs to have an internal representation of its world
which is sufficiently accurate to allow it to predict the results of planned
actions. This means that, among other things, it needs a good representation
of the shapes of bodies. While conventional shape modelling techniques permit
a hypothetical world to be represented in a computer they are not ideal for
robot applications, and the aim at Edinburgh is to combine techniques of shape
modelling with techniques used in A.I. so that the advantages of both may be
used. This will include the ability to deal effectively with uncertainty.
Recently, in collaboration with GEC, the robotics group have begun to consider
how the techniques of spatial inference which have been developed can be
extended into the area of mechanical design, based on the observation that the
essence of any design is the relationship between part features, rather than
the specific quantitative details. A proposal is being pursued for a
demonstrator project to produce a small scale, but highly integrated "Design
and Make" system on these lines.
Work on robot dynamics, also funded by the SERC, has resulted in the
development of highly efficient algorithms for simulating standard serial
robots, and in a novel representation of spatial quantities, which greatly
simplifies the mathematics.
Vision and Remote Sensing.
The interpretation of data derived from sensors depends on expectations about
the structure of the world which may be of a general nature, for example that
continuous surfaces occupy much of the scene, or specific. In manufacture the
prior expectations will be highly specific: one will know what objects are
likely to be present and how they are likely to be related to each other. One
vision project in the A.I. Department is taking advantage of this in
integrating vision with the RAPT development in robotics - the prior
expectations are expressed by defining body geometry in RAPT, and by defining
the expected inter-body relationships in the same medium.
A robot operating in a natural environment will have much less specific
expectations, and the A.I. Department collaborate with Heriot-Watt
University to study the sonar-based control of a submersible. This involves
building a world representation by integrating stable echo patterns, which are
interpreted as objects.
Natural Language.
A group working in the Department of A.I. and related departments in the School
of Epistemics is studying the development of computational models of language
production, the process whereby communicative intent is transformed into
speech. The most difficult problems to be faced when pursuing this goal cover
the fundamental issues of computation: structure and process. In the domain
of linguistic modelling, these are the questions of representation of
linguistic and real world knowledge, and the understanding of the planning
process which underlies speaking.
Many sorts of knowledge are employed in speaking - linguistic knowledge of how
words sound, of how to order the parts of a sentence to communicate who did
what to whom, of the meaning of words and phrases, and common sense knowledge
of the world. Representing all of these is prerequisite to using them in a
model of language production.
On the other hand, planning provides the basis for approaching the issue of
organizing and controlling the production process, for the mind seems to
produce utterances as the synthetic, simultaneous resolution of numerous
partially conflicting goals - communicative goals, social goals, purely
linguistic goals - all variously determined and related.
The potential for dramatic change in the study of human language which is made
possible by this injection of dynamic concerns into what has heretofore been
an essentially static enterprise is vast, and the A.I. Department sees its
work as attempting to realise some of that potential. The study of natural
language processing in the department is under the direction of Dr Thompson.
Planning Systems.
General purpose planning systems for automatically producing plans of action
for execution by robots have been a long standing theme of A.I. research. The
A.I. Department at Edinburgh had a very active programme of planning research
in the mid 1970s and was one of the leading international centres in this
area. The Edinburgh planners were applied to the generation of project plans
for large industrial activities (such as electricity turbine overhaul
procedures). These planners have continued to provide an important source of
ideas for later research and development in the field. A prototype planner in
use at NASA's Jet Propulsion Laboratory which can schedule the activities of a
Voyager-type planetary probe is based on Edinburgh work.
New work on planning has recently begun in the Department and is mainly
concerned with the interrelationships between planning, plan execution and
monitoring. The commercial exploitation of the techniques is also being
discussed. The Department's planning work is under the direction of Dr Tate.
Knowledge Based and Expert Systems.
Much of the A.I. Department's work uses techniques often referred to as
Intelligent Knowledge Based Systems (IKBS) - this includes robotics, natural
language, planning and other activities. However, researchers in the
Department of A.I. are also directly concerned with the creation of Expert
Systems in Ecological Modelling, User Aids for Operating Systems, Sonar Data
Interpretation, etc.
Computers in Education.
The Department has pioneered in this country an approach to the use of
computers in schools in which children can engage in an active and creative
interaction with the computer without needing to acquire abstract concepts and
manipulative skills for which they are not yet ready. The vehicle for this
work has been the LOGO language, which has a simple syntax making few demands
on the typing skills of children. While LOGO is in fact equivalent to a
substantial subset of LISP, a child can get moving with a very small subset of
the language, and one which makes the actions of the computer immediately
concrete in the form of the movements of a "turtle" which can either be
steered around a VDU or in the form of a small mobile robot.
This approach has a significant value in Special Education. For example in
one study an autistic boy found he was able to communicate with a "turtle",
which apparently acted as a metaphor for communicating with people, resulting
in his being able to use language spontaneously for the first time. In
another study involving mildly mentally and physically handicapped youngsters
a touch screen device invoked procedures for manipulating pictorial materials
designed to teach word attack skills to non-readers. More recent projects
include a diagnostic spelling program for dyslexic children, and a suite of
programs which deaf children can use to manipulate text to improve their
ability to use language expressively. Much of the Department's Computers in
Education work is under the direction of Dr Howe.
Teaching in the Department of A.I.
The Department is involved in an active teaching programme at undergraduate
and postgraduate level. At undergraduate level, there are A.I. first, second
and third year courses. There is a joint honours degree with the Department
of Linguistics. A large number of students are registered with the Department
for postgraduate degrees. An MSc/PhD in Cognitive Science is provided in
collaboration with the departments of Linguistics, Philosophy and Psychology
under the aegis of the School of Epistemics. The Department contributes two
modules on this: Symbolic Computation and Computational Linguistics. This
course has been accepted as a SERC supported conversion course. In October
1983 a new MSc programme in IT started. This is a joint activity with the
Departments of Computer Science and Electrical Engineering. It has a large
IKBS content which is supported by SERC.
Computing Facilities in the Department of A.I.
Computing requirements of researchers are being met largely through the
SERC DEC-10 situated at the Edinburgh Regional Computing Centre or residually
through use of UGC facilities. Undergraduate computing for A.I. courses is
supported by the EMAS facilities at ERCC. Postgraduate computing on courses
is mainly provided through a VAX 11/750 Berkeley 4.1BSD UNIX system within the
Department. Several groups in the Department use the ICL PERQ single user
machine. A growth in the use of this and other single user machines is
envisaged over the next few years. The provision of shared resources to these
systems in a way which allows for this growth in an orderly fashion is a
problem the Department wishes to solve.
It is anticipated that several further multi-user computers will soon be
installed - one at each site of the Department - to act as the hub of future
computing provision for the research pursued in Artificial Intelligence.
------------------------------
End of AIList Digest
********************
∂05-Oct-83 1327 BRODER@SU-SCORE.ARPA First AFLB talk this year
Received: from SU-SCORE by SU-AI with TCP/SMTP; 5 Oct 83 13:27:03 PDT
Date: Wed 5 Oct 83 13:26:00-PDT
From: Andrei Broder <Broder@SU-SCORE.ARPA>
Subject: First AFLB talk this year
To: aflb.all@SU-SCORE.ARPA
Stanford-Office: MJH 325, Tel. (415) 497-1787
This is to remind you that tomorrow is the
F I R S T A F L B T A L K
10/6/83 - Prof. Jeffrey D. Ullman (Stanford):
"A time-communication tradeoff"
Before the lecture we'll discuss some organizational aspects of AFLB,
BATS, etc.
See you there!! (MJH352, 12:30 p.m.)
- Andrei
-------
∂05-Oct-83 1353 GOLUB@SU-SCORE.ARPA Attendance at Tenured Faculty Meetings
Received: from SU-SCORE by SU-AI with TCP/SMTP; 5 Oct 83 13:53:28 PDT
Date: Wed 5 Oct 83 13:54:18-PDT
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Attendance at Tenured Faculty Meetings
To: Regular-Tenured-Faculty: ;
Please let me know how you feel about the attendance of Binford and Wiederhold
at our tenured faculty meetings. Binford is a research professor, term 5 years,
and Wiederhold, an associate research professor, term 5 years.
Do you think they should attend the meetings? Should they have a vote?
The same issue arises for Herriot. Should he attend? His vote will not be
counted in the Dean's office.
GENE
-------
∂05-Oct-83 1717 GOLUB@SU-SCORE.ARPA committee assignments
Received: from SU-SCORE by SU-AI with TCP/SMTP; 5 Oct 83 16:37:54 PDT
Date: Wed 5 Oct 83 16:34:07-PDT
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: committee assignments
To: faculty@SU-SCORE.ARPA
cc: abbis@SU-SCORE.ARPA, tajnai@SU-SCORE.ARPA
At the request of some of the faculty and staff, I have made several
changes in committee assignments. I hope this is agreeable to all.
GENE
Manna to Masters off Space
Oliger to Space (chairman)
Lenat to Masters
McCluskey to Forum off Comprehensive
-------
∂05-Oct-83 1717 GOLUB@SU-SCORE.ARPA Course proliferation
Received: from SU-SCORE by SU-AI with TCP/SMTP; 5 Oct 83 14:10:30 PDT
Date: Wed 5 Oct 83 14:10:08-PDT
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Course proliferation
To: faculty@SU-SCORE.ARPA
cc: PatasHNIK@SU-SCORE.ARPA, malaCHI@SU-SCORE.ARPA
There has been an increasing number of courses offered during the last
several years. These courses are often quite interesting and appropriate
but they affect our whole curriculum structure AND impact our financial
situation gravely.
The Department has a policy that no new course may be offered without
the approval of the curriculum committee. (Bob Floyd is currently head
of that committee.) I reserve the right of final approval.
In a similar way, we need to be careful in our assignment of TA's.
We are under-budgeted by the Dean's office and we need to conserve our
resources as much as possible. Therefore it is not possible to assign
TA's in an arbitrary manner. The regulations are
1/4 TA (= 10 hours) for 20-30 students
1/2 TA (= 20 hours) for over 30 students.
A grader is appointed as a Course Assistant.
Please try to conform to these rules.
GENE
-------
∂06-Oct-83 0025 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #32
Received: from SU-SCORE by SU-AI with TCP/SMTP; 6 Oct 83 00:24:50 PDT
Date: Wednesday, October 5, 1983 5:59PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #32
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Thursday, 6 Oct 1983 Volume 1 : Issue 32
Today's Topics:
Publications - Reports From Argonne,
Representation - Defining Functor/3
----------------------------------------------------------------------
Date: 27-Sep-83 16:08:22-CDT (Tue)
From: Gabriel@ANL-MCS (John Gabriel)
Subject: Argonne Reports On Work With Prolog
Copies of our reports ANL-83-70, and ANL-MCSD-TM-11 have been
sent to SU-Score for filing in the <Prolog> directory. Both
contain appendices listing experimental Prolog programs, and both
are in the public domain.
ANL-83-70 deals with the automated diagnosis of faults in physical
plant. The actual problem solved is a "toy" piece of combinational
digital logic, but we have a proposal in to work on a sequential
relay logic system of about 1000 relays managing interlocks on
a real plant. I am not sure at this stage if we will be funded,
but I hope we will be and that we will be able to put the
proofs of principle, if not the production system, in the public
domain.
Use of Prolog-based systems in real plant, particularly very
expensive real plant having an expected life in the fifty-year
range for major components, raises interesting questions about
software upgrades to track plant upgrades. In addition,
requirements to certify the reliability of systems important to
plant safety are severe by ordinary standards.
In the long term this may force us towards development efforts
on such things as Prolog compilers generating code running on
triply redundant processors, simply because that may be the
only way we establish traceability of decisions in software
development, together with required hardware reliability.
But those are some way away, and we expect to use the EDCAAD
CProlog for proofs of principle, and publish this work as
required by the EDCAAD licensing agreement.
ANL-MCSD-TM-11 deals with the following problem:- Suppose a
physical plant (or a software program) consists of components
(processes) linked by connections (data flows). Some aspects
of quality assurance for software or hardware "important to
safety" require certifications that the only paths into some
subsystem are those listed by the manufacturer.
The problem may be simply stated in graph theory: "Given a set
of arcs and two subgraphs of a given graph, prove that the set of
arcs, if removed, separates the graph into a pair of disjoint
subgraphs, and that the two given subgraphs are one in each
of the subgraphs separated by removal of the given arcs".
If this theorem is denied, find the additional arcs to be
cut in order to make it true.
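[ As a rough illustration of the stated problem, here is a minimal sketch
in a present-day Prolog of the verification half only; it is not taken
from the ANL reports. Arcs are assumed undirected and written as X-Y
pairs, with the arcs in Cut written with the same orientation as in Arcs;
member/2 and subtract/3 are the usual list-library predicates, and the
names separated/4, reachable/4 and adjacent/3 are invented here. -ed ]

% separated(+Arcs, +Cut, +SubA, +SubB) succeeds when deleting the arcs
% in Cut from Arcs leaves no path from any vertex of SubA to any of SubB.
separated(Arcs, Cut, SubA, SubB) :-
    subtract(Arcs, Cut, Rest),
    \+ ( member(X, SubA), member(Y, SubB), reachable(X, Y, Rest, [X]) ).

% reachable(X, Y, Arcs, Visited): Y can be reached from X using Arcs,
% avoiding the vertices already in Visited.
reachable(X, X, _, _).
reachable(X, Y, Arcs, Visited) :-
    adjacent(X, Z, Arcs),
    \+ member(Z, Visited),
    reachable(Z, Y, Arcs, [Z|Visited]).

adjacent(X, Z, Arcs) :- member(X-Z, Arcs).
adjacent(X, Z, Arcs) :- member(Z-X, Arcs).

% For example, separated([a-b,b-c,c-d], [b-c], [a], [d]) succeeds, while
% separated([a-b,b-c,c-d], [], [a], [d]) fails.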
[ These reports can be obtained through FTP as:
{SU-Score}PS:<Prolog>ANL-83-70.Txt
ANL-MCS-TM11.Txt
ANL-MCS-TM11.Figures -ed ]
------------------------------
Date: 4-Oct-83 14:18:39-CDT (Tue)
From: Pieper@ANL-MCS (Gail Pieper)
Subject: Notes About Obtaining Paper Copies of ANL Documents
Should anyone wish a hard copy of the ANL documents we send to
SU-Score, he may send a request via ARPA mail to:
Pieper@ANL-MCS
or write
Dr. Gail W. Pieper
Mathematics and Computer Science Division
Argonne National Laboratory
9700 South Cass Avenue
Argonne, Illinois 60439
------------------------------
Date: Tuesday, 4-Oct-83 00:09:09-BST
From: Richard HPS (on ERCC DEC-10) <OKeefe.R.A.@EDXA>
Subject: Defining functor/3
I recently had the opportunity of looking at a Prolog system
which had been developed from the Clocksin & Mellish textbook.
The trouble is that the Clocksin & Mellish book was written to
help beginners learn how to use three existing Prolog interpreters
( the DEC-10, PDP-11, and EMAS Prolog systems ), and not to help
Prolog implementers produce exactly the same language. So they
didn't discuss fine points.
An obvious example of this is the fact that in the DEC-10, EMAS,
& C-Prolog systems, disjunction is transparent to cut just like
conjunction. What I mean by this is that in
p :- a, !, b; c.
p :- d.
if 'a' is true but 'b' is false, 'p' will fail, because the cut
cut right back to p, and didn't stop at the semicolon. If you
are an implementor, you will regret this, because otherwise you
could write
(A ; B) :- call(A).
(A ; B) :- call(B).
But if you are NOT an implementor, there is no reason for you
to expect disjunction to be any different from conjunction in
this respect. The transparency of semicolon falls quite
naturally out of the way the DEC-10 compiler and interpreter work.
C-Prolog ( based on EMAS Prolog ) gets it right, but it is much
harder. Because PDP-11 Prolog Sacrificed Semicolon ( and a lot
of other things ) to keep its size down, and because disjunction
is not central to Prolog, the Clocksin & Mellish book doesn't
mention this "detail". ( Of course, if Prolog didn't have cuts,
the problem wouldn't arise... )
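[ A concrete way to see the difference, assuming a Prolog in which ';'
is transparent to cut (DEC-10, EMAS, C-Prolog and their descendants);
the facts a/0, c/0 and d/0 are invented purely for the illustration. -ed ]

a.
c.
d.

p :- ( a, !, fail ; c ).
p :- d.

% With a transparent ';', the query  ?- p.  fails: the cut discards the
% 'c' alternative of the disjunction as well as the second clause of p.
% If ';' were an ordinary predicate defined with call/1 as sketched above,
% the cut would be local to the first disjunct, and  ?- p.  would succeed
% via c (and again via d on backtracking).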
A less obvious question is what functor, arg, and univ should do.
The point of this message is that there is a clear design principle
which can settle the question, regardless of what any particular
implementation happens to do. { I haven't tried all possible
combinations on the DEC-10. If it doesn't act in accord with this
principle, I shall regard DEC-10 Prolog as wrong. }
The principle is this: as far as possible, every system predicate
should behave AS IF it were defined by an explicit table. The
interpreter or whatever should only produce an error message when
it is unable to simulate the table.
Here is a simple example. The predicate succ/2 is "notionally"
defined by the infinite table
succ(0, 1).
succ(1, 2).
...
succ(56432, 56433).
...
We can use this + the principle to specify exactly how succ(X,Y)
should behave in all circumstances:
X instantiated, but not a non-negative integer => fail
Y instantiated, but not a positive integer => fail
X unbound, Y > 0 => bind X to Y-1
Y unbound, X >= 0 => bind Y to X+1
X and Y both unbound => apologise and abort
A more than acceptable alternative in the last case would be to
enumerate all possible solutions by backtracking, but this is
not always possible. The point that I am making is that
succ(foo,Y) is not in error, it is simply false. There are only
two "error" messages that succ is entitled to produce:
sorry, succ(<maxint>,_) overflows
sorry, succ(_,_) needs at least one argument instantiated
and in both cases the error is the interpreter's, not the
programmer's. Of course, it is not particularly sensible to call
succ(a,b), but it is perfectly well defined. It is probably a
good idea to have a type-checking option which will print a warning
message if either argument of succ is bound to a non-integer, but
it MUST not abort the computation or cause an exception, it must
just fail. Type-checking is better done at compile-time ( see
files <Prolog>TYPECH.PL and <Prolog>PROLOG.TYP at SU-SCORE ).
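[ To make the principle concrete, here is a minimal sketch of a succ-like
predicate in a present-day Prolog, transcribed from the behaviour listed
above. The name my_succ/2 is used so as not to collide with any built-in
succ/2, and the wording of the apology is illustrative only. -ed ]

% my_succ(?X, ?Y): Y = X + 1 over the non-negative integers.
my_succ(X, Y) :-
    var(X), var(Y), !,                     % insufficient information
    write('sorry, my_succ(_,_) needs at least one argument instantiated'),
    nl, abort.
my_succ(X, Y) :-
    integer(X), X >= 0, !,                 % X known: compute Y
    Y is X + 1.
my_succ(X, Y) :-
    integer(Y), Y > 0, !,                  % Y known: compute X
    X is Y - 1.
% Any other instantiation pattern is simply false, so no further clauses
% are needed: my_succ(foo, Y) and my_succ(X, 0) just fail.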
Now we come to functor/3. The only thing that the principle
doesn't tell us is what is to be done with integers (or floating
point numbers, or data base references, or whatever). The DEC-10
and C-Prolog rule is
functor(X, X, 0) :- atomic(X).
so that for example functor(9, 9, 0) is true. This convention
greatly simplifies term hacking. So we imagine a table
functor(0, 0, 0).
functor(654987, 654987, 0).
functor('', '', 0).
functor(asdflkjhfdsa, asdflkjhfdsa, 0).
functor(A+B, +, 2).
...
Since functor(Term, never_before, 1297) may ( in implementation
terms) create a new functor never before seen, it is clear that
this table cannot be confined to the functors that appear in the
program. There is another predicate for that, called current_functor,
which depends on the current state of the program. Taking this
together with the principle tells us what functor(Term,F,N)
must do:
Term atomic => unify F=Term, N=0
Term compound => unify F=function symbol, N=arity
Term unbound, F unbound => apologise and abort
Term unbound, F compound => fail
Term unbound, F atomic but not an atom => unify N=0, Term=F
Term unbound, N bound but not a non-negative integer => fail
Term unbound, F atom, N unbound => apologise and abort
Term unbound, F atom, N non-negative integer =>
unify Term=F(_1,...,_N)
Without some sort of principle available to judge by, it would be
difficult to decide what functor(X, 9, N) should do. With this
principle, it is easy to see that there is exactly one solution,
and we can cheaply find it, so it should succeed and bind X=9, N=0.
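[ The table above transcribes almost directly into code. The following
sketch, in a present-day Prolog, leans on the built-in functor/3 for the
two construction/decomposition cases, so it only illustrates the case
analysis; my_functor/3 and the apology texts are invented here. -ed ]

my_functor(Term, F, N) :- atomic(Term), !, F = Term, N = 0.
my_functor(Term, F, N) :- nonvar(Term), !, functor(Term, F, N).  % compound
my_functor(_, F, _) :- var(F), !,
    write('sorry, my_functor/3 needs Term or F instantiated'), nl, abort.
my_functor(_, F, _) :- compound(F), !, fail.
my_functor(Term, F, N) :- atomic(F), \+ atom(F), !, N = 0, Term = F.
my_functor(_, _, N) :- nonvar(N), \+ (integer(N), N >= 0), !, fail.
my_functor(_, F, N) :- atom(F), var(N), !,
    write('sorry, my_functor/3 cannot enumerate arities'), nl, abort.
my_functor(Term, F, N) :- functor(Term, F, N).  % F atom, N >= 0: build Term

% In particular, my_functor(X, 9, N) reaches the fifth clause and binds
% X = 9, N = 0, as argued above.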
A similar analysis can be applied to arg/3 and =../2. For example,
Term =.. [f,a|_] is "apologise and abort", while 1 =.. [1] is true.
If we have a system which follows the principle, it follows that
any call on one of these evaluable predicates will have one of
three outcomes:
- the goal is true in the table, and succeeds correctly
- the goal is false in the table, and fails correctly
- there is insufficient information for the interpreter
to tell
In the last case, it would not be correct to fail: if the
interpreter had enough information to tell that the goal was
false, it would already have failed with no further ado, so there
might be true instances of the goal. But the interpreter lacks
the information to find such an instance ( or the patience ),
so it would not be correct to succeed either. That is why
I suggest "apologise and abort". DEC-10 compiled code does
not abide by this rule. If there is an uninstantiated variable
in an arithmetic expression, for example, it calmly substitutes
0 ( but doesn't bind the variable ) and continues, though it
does warn you. "abort" need not be taken too literally: ideally
the interpreter should enter a debugging mode, at the very least
you should be able to inspect the goal stack.
There are lots of things in Prolog that probably won't be in DeuLog
( ho deuteros logos ), but I think functor/arg/univ/succ/plus and
so on probably will be. They make sense even in a parallel
system, and are much closer to logic than say I/O is. So
it is worth taking some trouble over their definitions.
------------------------------
End of PROLOG Digest
********************
∂06-Oct-83 1525 LAWS@SRI-AI.ARPA AIList Digest V1 #71
Received: from SRI-AI by SU-AI with TCP/SMTP; 6 Oct 83 15:25:33 PDT
Date: Thursday, October 6, 1983 9:55AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #71
To: AIList@SRI-AI
AIList Digest Thursday, 6 Oct 1983 Volume 1 : Issue 71
Today's Topics:
Humor - The Lightbulb Issue in AI,
Reports - Edinburgh AI Memos,
Rational Psychology,
Halting Problem,
Artificial Organisms,
Technology Transfer,
Seminar - NL Database Updates
----------------------------------------------------------------------
Date: 6 Oct 83 0053 EDT (Thursday)
From: Jeff.Shrager@CMU-CS-A
Subject: The lightbulb issue in AI.
How many AI people does it take to change a lightbulb?
At least 55:
The problem space group (5):
One to define the goal state.
One to define the operators.
One to describe the universal problem solver.
One to hack the production system.
One to indicate how it is a model of human lightbulb
changing behavior.
The logical formalism group (16):
One to figure out how to describe lightbulb changing in
first order logic.
One to figure out how to describe lightbulb changing in
second order logic.
One to show the adequacy of FOL.
One to show the inadequacy of FOL.
One to show that lightbulb logic is non-monotonic.
One to show that it isn't non-monotonic.
One to show how non-monotonic logic is incorporated in FOL.
One to determine the bindings for the variables.
One to show the completeness of the solution.
One to show the consistency of the solution.
One to show that the two just above are incoherent.
One to hack a theorem prover for lightbulb resolution.
One to suggest a parallel theory of lightbulb logic theorem
proving.
One to show that the parallel theory isn't complete.
...ad infinitum (or absurdum as you will)...
One to indicate how it is a description of human lightbulb
changing behavior.
One to call the electrician.
The robotics group (10):
One to build a vision system to recognize the dead bulb.
One to build a vision system to locate a new bulb.
One to figure out how to grasp the lightbulb without breaking it.
One to figure out how to make a universal joint that will permit
the hand to rotate 360+ degrees.
One to figure out how to make the universal joint go the other way.
One to figure out the arm solutions that will get the arm to the
socket.
One to organize the construction teams.
One to hack the planning system.
One to get Westinghouse to sponsor the research.
One to indicate how the robot mimics human motor behavior
in lightbulb changing.
The knowledge engineering group (6):
One to study electricians' changing lightbulbs.
One to arrange for the purchase of the lisp machines.
One to assure the customer that this is a hard problem and
that great accomplishments in theory will come from his support
of this effort. (The same one can arrange for the fleecing.)
One to study related research.
One to indicate how it is a description of human lightbulb
changing behavior.
One to call the lisp hackers.
The Lisp hackers (13):
One to bring up the chaos net.
One to adjust the microcode to properly reflect the group's
political beliefs.
One to fix the compiler.
One to make incompatible changes to the primitives.
One to provide the Coke.
One to rehack the Lisp editor/debugger.
One to rehack the window package.
Another to fix the compiler.
One to convert code to the non-upward compatible Lisp dialect.
Another to rehack the window package properly.
One to flame on BUG-LISPM.
Another to fix the microcode.
One to write the fifteen lines of code required to change the
lightbulb.
The Psychological group (5):
One to build an apparatus which will time lightbulb
changing performance.
One to gather and run subjects.
One to mathematically model the behavior.
One to call the expert systems group.
One to adjust the resulting system so that it drops the
right number of bulbs.
[My apologies to groups I may have neglected. Pages to code before
I sleep.]
------------------------------
Date: Saturday, 1-Oct-83 15:13:42-BST
From: BUNDY HPS (on ERCC DEC-10) <bundy@edxa>
Reply-to: bundy@rutgers.arpa
Subject: Edinburgh AI Memos
If you want to receive a regular abstracts list and order form
for Edinburgh AI technical reports then write (steam mail I'm afraid)
to Margaret Pithie, Department of Artificial Intelligence, Forrest
Hill, Edinburgh, Scotland. Give your name and address and ask to be put
on the mailing list for abstracts.
Alan Bundy
------------------------------
Date: 29 Sep 83 22:49:18-PDT (Thu)
From: pur-ee!uiucdcs!uicsl!dinitz @ Ucb-Vax
Subject: Re: Rational Psychology - (nf)
Article-I.D.: uiucdcs.3046
The book mentioned, Metaphors We Live By, was written by George Lakoff
and Mark Johnson. It contains some excellent ideas and is written in a
style that makes for fast, enjoyable reading.
--Rick Dinitz
uicsl!dinitz
------------------------------
Date: 28 Sep 83 10:32:35-PDT (Wed)
From: decvax!duke!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: RE: Rational Psychology [and Reply]
I must say it's been exciting listening to the analysis of what "Rational
Psychology" might mean or should not mean. Should I go read the actual
article that started it all? Perish the thought. Is psychology rational?
Someone said that all sciences are rational, a moot point, but not that
relevant unless one wishes to consider Psychology a science. I do not.
This does not mean that psychologists are in any way inferior to chemists
or to REAL scientists like those who study physics. But I do think there
is a difference IN KIND between these fields and psychology. Very few of
us have any close intimate relationships with carbon compounds or
interstellar gas clouds. (At least not since the waning of the LSD era.) But
with psychology, anyone NOT in this category has no business in the field.
(I presume we are talking Human psychology.)
The way this difference might exert itself is quite hard to predict, tho
in my brief foray into psychology it was not so hard to spot. The great
danger is a highly amplified form of anthropomorphism which leads one to
form technical opinions quite possibly unrelated to technical or theoretical
analysis. In physics, there is a superficially similar process in which
the scientist develops a theory which seems to be a "pet theory" and then
sets about trying to show it true or false. The difference is that the
physicist developed his pet theory from technical origins rather than from
personal experience. There is no other origin for his ideas unless you
speculate that people have an inborn understanding of psi-mesons or spin
orbitals. Such theories MUST have developed from these ideas. In
psychology, the theory may well have been developed from a big scary dog
when the psychologist was two. THAT is a difference in kind, and I think
that is why I will always be suspicious of psychologists.
----GaryFostel----
[I think that is precisely the point of the call for rational psychology.
It is an attempt to provide a solid theoretical underpinning based on
the nature of mind, intelligence, emotions, etc., without regard to
carbon-based implementations or the necessity of explaining human psychoses.
As such, rational psychology is clearly an appropriate subject for
AIList and net.ai. Traditional psychology, and subjective attacks or
defenses of it, are less appropriate for this forum. -- KIL]
------------------------------
Date: 2 Oct 83 1:42:26-PDT (Sun)
From: ihnp4!ihuxv!portegys @ Ucb-Vax
Subject: Re: the Halting problem
Article-I.D.: ihuxv.565
I think that the answer to the halting problem in intelligent
entities is that there must exist a mechanism for telling it
whether its efforts are getting it anywhere, i.e. something that
senses its internal state and says if things are getting better,
worse, or whatever. Normally for humans, if a "loop" were to
begin, it should soon be broken by concerns like "I'm hungry
now, let's eat". No amount of cogitation makes that feeling
go away.
I would rather call this mechanism need than emotion, since I
think that some emotions are learned.
So then, needs serve two purposes for intelligence: (1) they supply
a direction for the learning which is a necessary part of
intelligence, and (2) they keep the intelligence from getting
bogged down in fruitless cogitation.
Tom Portegys
Bell Labs, IH
ihuxv!portegys
------------------------------
Date: 3 Oct 83 20:22:47 EDT (Mon)
From: Speaker-To-Animals <speaker%umcp-cs@UDel-Relay>
Subject: Re: Artificial Organisms
Why would we want to create machines equivalent to people when
organisms already have a means to reproduce themselves?
Because then we might be able to make them SMARTER than humans
of course! We might also learn something about ourselves along
the way too.
- Speaker
------------------------------
Date: 30 Sep 83 1:16:31-PDT (Fri)
From: decvax!genrad!mit-eddie!barmar @ Ucb-Vax
Subject: November F&SF
Article-I.D.: mit-eddi.774
Some of you may be interested in reading Isaac Asimov's article in the
latest (November, I think) Magazine of Fantasy and Science Fiction. The
article is entitled "More Thinking about Thinking", and is the Good
Doctor's views on artificial intelligence. He makes a very good case
for the idea that non-human thinking (i.e. in computers and
dolphins) is likely to be very different from, and perhaps superior to, human
thinking. He uses an effective analogy to locomotion: artificial
locomotion, namely the wheel, is completely unlike anything found in
nature.
--
Barry Margolin
ARPA: barmar@MIT-Multics
UUCP: ..!genrad!mit-eddie!barmar
------------------------------
Date: Mon, 3 Oct 83 23:17:18 EDT
From: Brint Cooper (CTAB) <abc@brl-bmd>
Subject: Re: Alas, I must flame...
I don't believe, as you assert, that the motive for clearing
papers produced under DOD sponsorship is 'economic' but, alas,
military. You then may justly argue the merits of non-export
of things militarily important vs the benefits which accrue
to all of us by a free and open exchange.
I'm not taking sides--yet, but am trying to see the issue
clearly defined.
Brint
------------------------------
Date: Tue, 4 Oct 83 8:16:20 EDT
From: Earl Weaver (VLD/VMB) <earl@brl-vat>
Subject: Flame on DoD
No matter what David Rogers @ sumex-aim thinks, the DoD "review" of all papers
before publishing is not to keep information private, but to make sure no
classified stuff gets out where it shouldn't be and to identify any areas
of personal opinion or thinking that could be construed to be official DoD
policy or position. I think it will have very little effect on actually
restricting information.
As with most research organizations, the DoD researchers are not immune to the
powers of the bean counters and must publish.
------------------------------
Date: Mon 3 Oct 83 16:44:24-PDT
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
Subject: Ph.D. oral
[Reprinted from the SU-SCORE bboard.]
Computer Science Department
Ph.D. Oral, Jim Davidson
October 18, 1983 at 2:30 p.m.
Rm. 303, Building 200
Interpreting Natural Language Database Updates
Although the problems of querying databases in natural language are well
understood, the performance of database updates via natural language introduces
additional difficulties. This talk discusses the problems encountered in
interpreting natural language updates, and describes an implemented system that
performs simple updates.
The difficulties associated with natural language updates result from the fact
that the user will naturally phrase requests with respect to his conception of
the domain, which may be a considerable simplification of the actual underlying
database structure. Updates that are meaningful and unambiguous from the
user's standpoint may not translate into reasonable changes to the underlying
database.
The PIQUE system (Program for Interpretation of Queries and Updates in English)
operates by maintaining a simple model of the user, and interpreting update
requests with respect to that model. For a given request, a limited set of
"candidate updates"--alternative ways of fulfilling the request--are
considered, and ranked according to a set of domain-independent heuristics that
reflect general properties of "reasonable" updates. The leading candidate may
be performed, or the highest ranking alternatives presented to the user for
selection. The resultant action may also include a warning to the user about
unanticipated side effects, or an explanation for the failure to fulfill a
request.
This talk describes the PIQUE system in detail, presents examples of its
operation, and discusses the effectiveness of the system with respect to
coverage, accuracy, efficiency, and portability. The range of behaviors
required for natural language update systems in general is discussed, and
implications of updates on the design of data models are briefly considered.
------------------------------
End of AIList Digest
********************
∂06-Oct-83 2023 LENAT@SU-SCORE.ARPA Fuzzy Lunch
Received: from SU-SCORE by SU-AI with TCP/SMTP; 6 Oct 83 20:22:16 PDT
Date: Thu 6 Oct 83 20:21:30-PDT
From: Doug Lenat <LENAT@SU-SCORE.ARPA>
Subject: Fuzzy Lunch
To: faculty@SU-SCORE.ARPA
Lotfi Zadeh will be joining us for lunch on Tuesday. He'll be
giving the colloquium later that day, on Reasoning with
Commonsense Knowledge; its abstract is on BBOARD.
Hope you can join us for both of those times.
Doug
-------
∂07-Oct-83 2127 REGES@SU-SCORE.ARPA Use of CSD machines for coursework
Received: from SU-SCORE by SU-AI with TCP/SMTP; 7 Oct 83 21:27:00 PDT
Date: Fri 7 Oct 83 21:27:52-PDT
From: Stuart Reges <REGES@SU-SCORE.ARPA>
Subject: Use of CSD machines for coursework
To: faculty@SU-SCORE.ARPA
Office: Margaret Jacks 210, 497-9798
The Department can obtain some limited funding from the University for
providing computer support for classes. Betty and I are putting
together requests for funding and need to know which CSD classes we plan
to provide computing support for.
Just to be clear, let me say that this is independent of the issue
discussed at the faculty meeting of providing computer support for our
PhD students. The University will pay a certain amount of money to
support computing for all members of a given class.
We are requesting funding from two different University sources and the
criteria for each are different:
1. H&S has funds to support classes that require computing
resources unique to CSD. An example would be a course that
requires the use of a program that runs only on SAIL. LOTS
and CIT are unable to provide that resource, so the
University must spend its money here. Last year we obtained
$10,000 for such classes.
2. Bob Street, the new Vice-Provost for Academic Computing, is
interested in buying computing time from CSD to provide
``peaking power'' for LOTS. CSD is willing to do so,
naturally, only if we can accomplish it with a minimum of
hassle for us. We don't want hordes of students using our
terminals and we don't want to distribute keys to the
building. The ideal is a small class composed mostly of CS
students that puts high demand on LOTS at peak times.
I'm sure that the demand for this kind of support is going to exceed the
availability of funds. Gene has asked me to obtain ``applications'' for
courses being taught this academic year. If you are teaching a course
that you think falls in one of the above categories, please give me a
brief statement outlining your reasons. Gene will make the final
decision of which courses we will support on CSD machines.
Please get me your applications by Friday, October 14th. We need to
make our request for money now.
--Stuart
-------
∂08-Oct-83 0025 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #33
Received: from SU-SCORE by SU-AI with TCP/SMTP; 8 Oct 83 00:25:34 PDT
Date: Friday, October 7, 1983 10:45PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #33
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Saturday, 8 Oct 1983 Volume 1 : Issue 33
Today's Topics:
Queries - Good Implementations & Equality,
Representation - More Retract
----------------------------------------------------------------------
Date: Friday, 30-Sep-83 14:20:53-BST
From: Bundy HPS (on ERCC DEC-10) <Bundy@EDXA>
Subject: Good Prolog Implementations
Our DEC-10 is soon to close and I am looking for a replacement
machine which can run Prolog with a performance at least as
good. Do you know of a machine with an existing implementation
of Prolog capable of 40K LIPS or better? Do you know of plans
to mount such an implementation? If so, can you give me further
details of supplier, price, etc. and of how good the Prolog
environment is?
-- Alan Bundy
------------------------------
Date: Monday, 3-Oct-83 10:30:33-BST
From: Luis HPS (on ERCC DEC-10) <LEJm@EDXA>
Subject: DEC-10 Prolog Version Of Kornfeld's Prolog-With-Equality
Someone submitted to the net a few weeks (months ?) ago a DEC-10
Prolog version of some of Bill Kornfeld's code to compute with
partially-instantiated objects ("omega" objects). I thought I
had archived it but can't find where and I am still interested
in experimenting with it. Does anyone out there have it?
Thanks,
-- Luis Jenkins
LEJm%EDXA@UCL-CS
------------------------------
Date: Wednesday, 5-Oct-83 22:16:36-BST
From: Richard HPS (on ERCC DEC-10) <OKeefe.R.A.@EDXA>
Subject: More On Retract
A Problem with Using Retract for Database Update, and a
Suggested Utility for Avoiding the Problem.
It has been shown (I forget by whom) that Prolog augmented
with the setof/3 primitive is relationally complete. (I
do not know if this is true with findall; I suspect that
it may not be.) This means that you can expect it to do
the right thing with data base queries. But what about
data base updates ?
Suppose we have a data base with the following relations:
boss(Worker, /* -> */ Boss)
pay(Worker, /* -> */ Wage)
where Bosses are a kind of Worker, and may have Bosses of their own,
and we want to give any Boss who currently gets less than one of his
own workers a 10% raise. Note that after this update he may still
get less than one of his workers, and he may now get more than his
boss, and his boss may not have got a pay rise. Never mind if this
is sensible, let's just take it as a specification.
We start by defining the new pay relation.
underpaid(Worker) :-
        pay(Worker, Wage),
        boss(Subordinate, Worker),
        pay(Subordinate, Bigger),
        Bigger > Wage.

new←pay(Worker, Pay) :-
        pay(Worker, Wage),
        (   underpaid(Worker) -> Pay is Wage*11/10
        |   Pay = Wage
        ).
Our task is to replace each pay(W,P) tuple by the corresponding
new←pay(W,P) tuple.
The first approach that occurs to everybody is
:- forall(new←pay(Worker, Pay),
          retract(pay(Worker, ←)) & assert(pay(Worker, Pay))).
where the standard predicate forall/2 is defined as
forall(Generator, Test) :-
        \+ (Generator, \+ Test).
If you thought this would work, GOTCHA! The problem is exactly
like the error in
for i := 1 to n do a[i+1] := a[i];
as a means of shifting the contents of a[]. Some of the new←pay
tuples are calculated using new←pay tuples when they should have
been calculated using pay tuples. (There are specifications
where this is the correct thing to do. But this isn't one of
them.) The operation we really want is: given a way of computing
new tuples, replace an old relation by a new one, where the new
tuples are calculated only from the old ones.
There is another problem where this crops up. If you want to give
everyone a 10% raise,
:- forall(retract(pay(Worker, Old)) & New is (Old*11+5)/10,
          assert(pay(Worker, New)) ).
is a very natural and very silly thing to do. Because retract will
see the new clauses, and after giving everyone a 10% raise it will
give them another, and another, and another... The obvious way of
getting round it is to use asserta instead of assertz, which has
the side effect of reversing the order of the tuples. If we had a
solution to the other problem, we could use it here too.
There is bound to be a better solution. It probably won't involve
assert and retract at all. But this one seems to work. The idea
is that we have a relation p(X1,...,Xn) and a rule for computing
new tuples, let's call it new←p(X1,...,Xn). What we do is call
update(p(X1,...,Xn), new←p(X1,...,Xn))
where
:- public update/2.
:- mode update(+, +).

update(Template, Generator) :-
        recorda(., ., Ref),
        (   call(Generator), recorda(., Template, ←), fail
        ;   functor(Template, Functor, Arity),
            abolish(Functor, Arity),          % delete old relation
            recorded(., Term, DbRef),
            erase(DbRef),
            (   DbRef == Ref                  % we've come to the end
            ;   asserta(Term), fail           % copy new tuple
            )
        ), !.
This isn't beautiful. It has very nearly the maximum number of
blemishes possible in a single clause. Basically, it works
almost exactly like find←all. It generates the new tuples in a
failure-driven loop, and tucks them away in a safe place in
reverse order. It then deletes the old relation, and in another
failure-driven loop pulls the new tuples out and stores them
in reverse order. The two reversals mean that the new tuples
will appear in the data-base in the order that they were
generated. The final cut is necessary in case there are other
facts stored under '.' and the caller tries to backtrack into
update, otherwise we might delete '.' facts that are none of
our business.
With this new operation, our two examples are easily handled.
:- update(pay(W, P), new←pay(W, P)).
:- update(pay(W, N), pay(W, O) & N is (O*11+5)/10).
Also, it can be used to create new stored relations, E.g.
:- update(coworker(X, Y), boss(X, B) & boss(Y, B)).
Just like findall, setof, bagof, this operation has a horribly
ugly implementation in terms of asserts and retracts, but is
itself a clean abstraction. Also, if you are building a new
Prolog system from scratch, there is no reason why setof, bagof,
or the data collection phase of update have to use the data-base
at all. Only the final replacement in update needs to change
the data base, and replacing an entire relation could have
much less impact on the rest of the implementation than the
current form of assert and retract do.
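As a minimal sketch of that remark (not part of the original message), here is the same update/2 idea with the collection phase done by findall/3 instead of the recorded database; it assumes findall/3 is available and takes '&' to be the conjunction operator used in the Generator examples above:

    :- op(950, xfy, &).                 % conjunction operator used in the examples
    X & Y :- call(X), call(Y).

    update(Template, Generator) :-
            findall(Template, Generator, NewTuples),   % collect every new tuple first
            functor(Template, Functor, Arity),
            abolish(Functor, Arity),                   % only then delete the old relation
            assert_each(NewTuples).

    assert_each([]).
    assert_each([Tuple|Tuples]) :-
            assertz(Tuple),                            % assertz keeps generation order
            assert_each(Tuples).

Because findall/3 finishes before abolish/2 runs, the new tuples are computed only from the old relation, as required.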
If you were using retract to implement relational database
update, this will probably replace most of your retracts.
But there are other uses of assert and retract than this,
and I still don't know what to do about them. I thoroughly
enjoy seeing my name in the Prolog Digest, but I'd rather
see other people's solutions than my questions. Please
tell us about the lovely operation you use instead of assert
and retract; I have lots of recordz-s and erase-s I would
dearly like to conceal.
------------------------------
End of PROLOG Digest
********************
∂08-Oct-83 1745 BRODER@SU-SCORE.ARPA Speaker needed
Received: from SU-SCORE by SU-AI with TCP/SMTP; 8 Oct 83 17:44:57 PDT
Date: Sat 8 Oct 83 17:45:36-PDT
From: Andrei Broder <Broder@SU-SCORE.ARPA>
Subject: Speaker needed
To: aflb.all@SU-SCORE.ARPA
Stanford-Office: MJH 325, Tel. (415) 497-1787
HELP! We need a speaker for the coming Thursday. (Oct. 13)
Don't miss your chance; the next available slot is in the middle of
December!
Thanks, Andrei
-------
∂09-Oct-83 0852 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #34
Received: from SU-SCORE by SU-AI with TCP/SMTP; 9 Oct 83 08:52:13 PDT
Date: Sunday, October 9, 1983 7:23AM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #34
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Sunday, 9 Oct 1983 Volume 1 : Issue 34
Today's Topics:
Implementations - Bagof,
Announcement - New Fellowship Available
----------------------------------------------------------------------
Date: Thursday, 6-Oct-83 13:59:29-BST
From: Richard HPS (on ERCC DEC-10) <OKeefe.R.A.@EDXA>
Subject: Bagof - Falsely So-Called
I have just received the manual for yet another Prolog interpreter.
It is one of the closest to DEC-10 Prolog, and its differences are
well-motivated. So why am I furious?
Because the manual lists a predicate bagof/3, and almost the only
thing which is clear from the 2-line description is that it is no
such thing. It is findall. (Indeed the description calls it
findall.) I do not claim that the author of the program is
deliberately committing fraud. I cannot know whether it was he
himself who wrote the manual, or whether another "improved" it
before it reached me.
findall is a special case of bagof. bagof and setof are described
in the paper "Higher-Order Extensions to Prolog, are they Needed?"
by David Warren, and in the current DEC-10 Prolog manual by David Bowen,
both available from DAI Edinburgh (see almost any issue of SigArt
to find out how to get them). The big question is "what happens to
unbound variables in the generator".
The classic example is
likes(bill, baroque).
likes(bill, jazz).
likes(fred, jazz).
likes(fred, rock).
?- findall(X, likes(X,Y), Likers).
=>      X = ←0
        Y = ←1
        Likers = [bill,bill,fred,fred]

?- bagof(X, likes(X,Y), Likers).
=>      X = ←0
        Y = baroque
        Likers = [bill] ;

        X = ←0
        Y = jazz
        Likers = [bill,fred] ;

        X = ←0
        Y = rock
        Likers = [fred]
If you want the findall interpretation, you can use the
"existential quantifier" ↑ and write
?- bagof(X, Y↑likes(X,Y), Likers).
and get the same answer as findall (in fact if all the
otherwise unbound variables are existentially quantified
bagof performs exactly the same steps as findall).
bagof can be used to simulate findall, but findall cannot be
directly used to simulate bagof. You have to collect all
the variables into your list, group together the solutions
which match on the universally quantified variables (in
this case Y) -- this is why DEC-10 Prolog and C-Prolog have
the predicate keysort/2 -- and backtrack through the matched
segments. This can be, and is, done in Prolog. You have to
be a bit careful with some of the boundary cases, and with
strange things turning up in the generator, but anyone who
is capable of writing say a subsumption checker (not using
numbervars) is capable of writing a correct version of bagof
and setof. The main difference between bagof and setof is
which sorting routine they use. Few predicates are easier
to write than a merge sort, and merge sort seems to be the
most efficient sorting routine possible in Prolog.
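A rough sketch of that recipe, for the likes/2 example above only (not a general bagof/3; it assumes findall/3, keysort/2 and member/2 are available):

    bagof_likers(Music, Likers) :-
            findall(Music-Person, likes(Person, Music), Pairs),
            keysort(Pairs, Sorted),        % group solutions with equal Music keys
            runs(Sorted, Runs),            % [baroque-[bill], jazz-[bill,fred], rock-[fred]]
            member(Music-Likers, Runs).    % backtrack through the matched segments

    runs([], []).
    runs([Key-Value|Pairs], [Key-[Value|Values]|Runs]) :-
            prefix_run(Key, Pairs, Values, Rest),
            runs(Rest, Runs).

    prefix_run(Key, [Key-Value|Pairs], [Value|Values], Rest) :- !,
            prefix_run(Key, Pairs, Values, Rest).
    prefix_run(_, Pairs, [], Pairs).

The real bagof must also handle the ↑ quantifier and arbitrary free variables, which is where the boundary cases mentioned above come in.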
setof is particularly important for data-base work. The main
reason I am sure that the bagof described in the manual for
this otherwise excellent Prolog is not bagof is that once you
have written the REAL bagof (admittedly not a trivial 10-line
thing like findall) you're 99% of the way to setof, and the
manual doesn't mention it.
I'm fairly confident that if I had bought a copy of the interpreter,
stating in my order that I wanted it because of bagof, I
would be able to prove fraud in court. I am not in fact accusing
the author of this system of having any fraudulent intent, nor am
I alleging that he is incapable of implementing the real bagof
(and there's the tragedy, why didn't he), nor am I alleging that
the interpreter is not value for money. I'm just saying, in
rather strong language:
- calling "findall" "bagof" is seriously misleading
- it is unhelpful to people who have read Clocksin &
Mellish [as they'll look for "findall" and not know
about "bagof"]
- it is positively damaging to people who want to use
the real thing [as they may not at first realise why
their correct and accepted program doesn't work, and
will then be unable to replace bagof with the correct
definition without hacking the interpreter]
- it is unnecessary
So please, Prolog implementors, DON'T DO IT !
------------------------------
Date: Friday, 7-Oct-83 17:18:37-BST
From: Rae FHL (on ERCC DEC-10) <RAE@EDXA>
Subject: New Post at Edinburgh AI
Department of Artificial Intelligence
University of Edinburgh
Research Fellow
A Research Fellowship is available within the Programming Systems
Development Group. This post, funded by the Science and Engineering
Research Council for a period of two years, is to provide a high
performance Prolog system for workers in Intelligent Knowledge Based
Systems.
A good knowledge of the C programming language and UNIX will be
required. Previous experience in implementing and mounting
language systems, and an acquaintance with Prolog, would be an
advantage.
Applicants should have a PhD in a relevant area or equivalent
industrial experience. The appointment will be made on the IA
salary range, 7190 - 11615 pounds sterling, according to age
and experience. The post is funded for a period of two years
from the date of appointment.
Further particulars of the post can be obtained from:
Administrative Assistant
Department of Artificial Intelligence
University of Edinburgh
Forrest Hill
Edinburgh EH1 2QL
SCOTLAND
phone
031-667-1011 x2554
or, by ARPAnet
Rae%EDXA@UCL-CS
------------------------------
End of PROLOG Digest
********************
∂10-Oct-83 1544 GOLUB@SU-SCORE.ARPA Exciting application
Received: from SU-SCORE by SU-AI with TCP/SMTP; 10 Oct 83 15:43:51 PDT
Date: Mon 10 Oct 83 15:43:49-PDT
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Exciting application
To: Su-bboards@SU-SCORE.ARPA, faculty@SU-SCORE.ARPA
Dr. Don Kristtt, who is director of neuropathology, is setting up
a computer microscope and would like students who are interested
in joining his venture. It seems to involve database management and AI.
His number is 7-6041.
GENE
-------
∂10-Oct-83 1623 LAWS@SRI-AI.ARPA AIList Digest V1 #72
Received: from SRI-AI by SU-AI with TCP/SMTP; 10 Oct 83 16:22:34 PDT
Date: Monday, October 10, 1983 10:16AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #72
To: AIList@SRI-AI
AIList Digest Monday, 10 Oct 1983 Volume 1 : Issue 72
Today's Topics:
Administrivia - AIList Archives,
Music & AI - Request,
NL - Semantic Chart Parsing & Simple English Grammar,
AI Journals - Address of "Artificial Intelligence",
Alert - IEEE Computer Issue,
Seminars - Stanfill at Univ. of Maryland, Zadeh at Stanford,
Commonsense Reasoning
----------------------------------------------------------------------
Date: Sun 9 Oct 83 18:03:24-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: AIList Archives
The archives have grown to the point that I can no longer
keep them available online. I will keep the last three months'
issues available in <ailist>archive.txt on SRI-AI. Preceding
issues will be backed up on tape, and will require about a
day's notice to recover. The tape archive will consist of
quarterly composites (or smaller groupings, if digest activity
gets any higher than it has been). The file names will be of
the form AIL1N1.TXT, AIL1N19.TXT, etc. All archives will be in
the MMAILR mailer format.
The online archive may be obtained via FTP using anonymous login.
Since a quarterly archive can be very large (up to 300 disk pages)
it will usually be better to ask me for particular issues than to
FTP the whole file.
-- Ken Laws
------------------------------
Date: Thu, 25 Aug 83 00:07:53 PDT
From: uw-beaver!utcsrgv!nixon@LBL-CSAM
Subject: AIList Archive- Univ. of Toronto
[I previously put out a request for online archives that could
be obtained by anonymous FTP. There were very few responses.
Perhaps this one will be of use. -- KIL]
Dear Ken,
Copies of the AIList Digest are kept in directory /u5/nixon/AIList
with file names V1.5, V1.40, etc. Our uucp site name is "utcsrgv".
This is subject to change in the very near future as the AI group at the
University of Toronto will be moving to a new computer.
Brian Nixon.
------------------------------
Date: 4 Oct 83 9:23:38-PDT (Tue)
From: hplabs!hao!cires!nbires!ut-sally!riddle @ Ucb-Vax
Subject: Re: Music & AI, pointers wanted
Article-I.D.: ut-sally.86
How about posting the results of the music/ai poll to the net? There
have been at least two similar queries in recent memory, indicating at
least a bit of general interest.
[...]
-- Prentiss Riddle
{ihnp4,kpno,ctvax}!ut-sally!riddle
riddle@ut-sally.UUCP
------------------------------
Date: 5 Oct 83 19:54:32-PDT (Wed)
From: pur-ee!uiucdcs!uicsl!dinitz @ Ucb-Vax
Subject: Re: Re: NL argument between STLH and Per - (nf)
Article-I.D.: uiucdcs.3132
I've heard of "syntactic chart parsing," but what is "semantic chart
parsing?" It sounds interesting, and I'd like to hear about it.
I'm also interested in seeing your paper. Please make arrangements with me
via net mail.
Rick Dinitz
U. of Illinois
...!uicsl!dinitz
------------------------------
Date: 3 Oct 83 18:39:00-PDT (Mon)
From: pur-ee!ecn-ec.davy @ Ucb-Vax
Subject: WANTED: Simple English Grammar - (nf)
Article-I.D.: ecn-ec.1173
Hello,
I am looking for a SIMPLE set of grammar rules for English. To
be specific, I'm looking for something of the form:
SENT = NP + VP ...
NP = DET + ADJ + N ...
VP = ADV + V + DOBJ ...
etc.
I would prefer a short set of rules, something on the order of one or two
hundred lines. I realize that this isn't enough to cover the whole English
language; I don't want it to. I just want something which could handle
"simple" sentences, such as "The cat chased the mouse", etc. I would like
to have rules for questions included, so that something like "What does a
hen weigh?" can be covered.
I've scoured our libraries here, and have only found one book with
a grammar for English in it, and it's much more complex than what I want.
Any pointers to books/magazines or grammars themselves would be greatly
appreciated.
Thanks in advance (as the saying goes)
--Dave Curry
decvax!pur-ee!davy
eevax.davy@purdue
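For what it is worth, a toy grammar of roughly the requested shape can be written as a handful of Prolog DCG rules; this is purely illustrative, with a made-up vocabulary, and handles only trivial declaratives:

    sentence    --> noun_phrase, verb_phrase.
    noun_phrase --> determiner, noun.
    verb_phrase --> verb, noun_phrase.
    determiner  --> [the].
    determiner  --> [a].
    noun        --> [cat].
    noun        --> [mouse].
    noun        --> [hen].
    verb        --> [chased].
    verb        --> [saw].

    % e.g.  ?- phrase(sentence, [the, cat, chased, the, mouse]).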
------------------------------
Date: 6 Oct 83 17:21:29-PDT (Thu)
From: ihnp4!cbosgd!cbscd5!lvc @ Ucb-Vax
Subject: Address of "Artificial Intelligence"
Article-I.D.: cbscd5.739
Here is the address of "Artificial Intelligence" if anyone is interested:
Artificial Intelligence (bi-monthly $136 -- Ouch !)
North-Holland Publishing Co.,
Box 211, 1000 AE
Amsterdam, Netherlands.
Editors D.G. Bobrow, P.J. Hayes
Advertising, book reviews, circulation 1,100
Also avail. in microform from
Microforms International Marketing Co.
Maxwell House
Fairview Park
Elmsford NY 10523
Indexed: Curr. Cont.
Larry Cipriani
cbosgd!cbscd5!lvc
[There is a reduced rate for members of AAAI. -- KIL]
------------------------------
Date: Sun 9 Oct 83 17:45:52-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: IEEE Computer Issue
Don't miss the October 1983 issue of IEEE Computer. It is a
special issue on knowledge representation, and includes articles
on learning, logic, and other related topics. There is also a
short list of 30 expert systems on p. 141.
------------------------------
Date: 8 Oct 83 04:18:04 EDT (Sat)
From: Bruce Israel <israel%umcp-cs@UDel-Relay>
Subject: University of Maryland AI talk
[Reprinted from the University of Maryland BBoard]
The University of Maryland Computer Science Dept. is starting an
informal AI seminar, meeting every other Thursday in Room 2330,
Computer Science Bldg, at 5pm.
The first meeting will be held Thursday, October 13. All are welcome
to attend. The abstract for the talk follows.
MAL: My AI Language
Craig Stanfill
Department of Computer Science
University of Maryland
College Park, MD 20742
In the course of writing my thesis, I implemented an AI language, called
MAL, for manipulating symbolic expressions. MAL runs in the University of
Maryland Franz Lisp Environment on a VAX 11/780 under Berkeley Unix (tm) 4.1.
MAL is of potential benefit in knowledge representation research, where it
allows the development and testing of knowledge representations without build-
ing an inference engine from scratch, and in AI education, where it should
allow students to experiment with a simple AI programming language. MAL pro-
vides for:
1. The representation of objects and queries as symbolic expressions.
Objects are recursively constructed from sets, lists, and bags of atoms
(as in QLISP). A powerful and efficient pattern matcher is provided.
2. The rule-directed simplification of expressions. Limited facilities for
depth first search are provided.
3. Access to a database. Rules can assert and fetch simplifications of
expressions. The database also employs a truth maintenance system.
4. The construction of large AI systems by the combination of simpler modules
called domains. For each domain, there is a database, a set of rules, and
a set of links to other domains.
5. A set of domains which are generally useful, especially for spatial rea-
soning. This includes domains for solid and linear geometry, and for
algebra.
6. Facilities which allow the user to customize MAL (to a degree). Calls to
arbitrary LISP functions are supported, allowing the language to be easily
extended.
------------------------------
Date: Thu 6 Oct 83 20:18:09-PDT
From: Doug Lenat <LENAT@SU-SCORE.ARPA>
Subject: Colloquium Oct 11: ZADEH
[Reprinted from the SU-SCORE bboard.]
Professor Lotfi Zadeh, of UCB, will be giving the CS colloquium this
Tuesday (10/11). As usual, it will be in Terman Auditorium, at 4:15
(preceded at 3:45 by refreshments in the 3rd floor lounge of Margaret
Jacks Hall).
The title and abstract for the colloquium are as follows:
Reasoning With Commonsense Knowledge
Commonsense knowledge is exemplified by "Glass is brittle," "Cold is
infectious," "The rich are conservative," "If a car is old, it is
unlikely to be in good shape," etc. Such knowledge forms the basis
for most of human reasoning in everyday situations.
Given the pervasiveness of commonsense reasoning, a question which
begs for an answer is: Why is commonsense reasoning a neglected area in
classical logic? Because, almost by definition, commonsense
knowledge is that knowledge which is not representable as a
collection of well-formed formulae in predicate logic or other
logical systems which have the same basic conceptual structure as
predicate logic.
The approach to commonsense reasoning which is described in the talk
is based on the use of fuzzy logic -- a logic which allows the use of
fuzzy predicates, fuzzy quantifiers and fuzzy truth-values. In this
logic, commonsense knowledge is defined to be a collection of
dispositions, that is propositions with suppressed fuzzy quantifiers.
To infer from such knowledge, three basic syllogisms are developed:
(1) the intersection/product syllogism; (2) the consequent
conjunction syllogism; and (3) the antecedent conjunction syllogism.
The use of these syllogisms in commonsense reasoning and their
application to the combination of evidence in expert systems is
discussed and illustrated by examples.
------------------------------
Date: Fri 7 Oct 83 09:42:30-PDT
From: Christopher Schmidt <SCHMIDT@SUMEX-AIM>
Subject: "rich" = "conservative" ?
[Reprinted from the SU-SCORE bboard.]
Subject: Colloquium Oct 11: ZADEH
The title and abstract for the colloquium are as follows:
Reasoning With Commonsense Knowledge
I don't think I've seen flames in response to abstracts before, but I get
so sick of hearing "rich," "conservative," and "evil" used as synonyms.
Commonsense knowledge is exemplified by [...] "The rich are
conservative," [...].
In fact, in the U.S., 81% of people with incomes over $50,000 are
registered Democrats. Only 47% with incomes under $50,000 are. (The
remaining 53% are made up of "independents," &c..) The Democratic
Party gets the majority of its funding from contributions of over
$1000 apiece. The Republican Party is mostly funded by contributions
of $10 and under. (Note: I'd be the last to equate Conservatism and
the Republican Party. I am a Tory and a Democrat. However, more
"commonsense knowledge" suggests that I can use the word "Republican"
in place of "conservative" for the purpose of refuting the equation
of "rich" and "conservative."
Such knowledge forms the basis for most of human reasoning in everyday
situations.
This statement is so true that it is the reason I gave up political writing.
Given the pervasiveness of commonsense reasoning, a question which
begs for an answer is: Why is commonsense reasoning a neglected area in
classical logic? [...]
Perhaps because false premises tend to give rise to false conclusions? Just
what we need--"ignorant systems." (:-)
--Christopher
------------------------------
Date: Fri 7 Oct 83 10:22:37-PDT
From: Richard Treitel <TREITEL@SUMEX-AIM>
Subject: Re: "rich" = "conservative" ?
[Reprinted from the SU-SCORE bboard.]
Why is logic a neglected area in commonsense reasoning? (to say nothing of
political writing)?
More seriously, or at least more historically, a survey was once taken of
ecological and other pressure groups in England, asking them which had been the
most and least effective methods they had used to convince governmental bodies.
Right at the bottom of the list of "least effective" was Reasoned Argument.
- Richard
------------------------------
Date: Fri, 7 Oct 83 10:36 PDT
From: Vaughan Pratt <pratt@Navajo>
Subject: Reasoned Argument
[Reprinted from the SU-SCORE bboard.]
[...]
I think if "Breathing" had been on the list along with "Reasoned
Argument" then the latter would only have come in second last.
It is not that reasoned argument is ineffective but that it is on
a par with breathing, namely something we do subconsciously. Consciously
performed reasoning is only marginally reliable in mathematical circles,
and quite unreliable in most other areas. It makes most people dizzy,
much as consciously performed breathing does.
-v
------------------------------
End of AIList Digest
********************
∂10-Oct-83 2157 LAWS@SRI-AI.ARPA AIList Digest V1 #73
Received: from SRI-AI by SU-AI with TCP/SMTP; 10 Oct 83 21:55:56 PDT
Delivery-Notice: While sending this message to SU-AI.ARPA, the
SRI-AI.ARPA mailer was obliged to send this message in 50-byte
individually Pushed segments because normal TCP stream transmission
timed out. This probably indicates a problem with the receiving TCP
or SMTP server. See your site's software support if you have any questions.
Date: Monday, October 10, 1983 4:17PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #73
To: AIList@SRI-AI
AIList Digest Tuesday, 11 Oct 1983 Volume 1 : Issue 73
Today's Topics:
Halting Problem,
Consciousness,
Rational Psychology
----------------------------------------------------------------------
Date: Thu 6 Oct 83 18:57:04-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Halting problem discussion
This discussion assumes that "human minds" are at least equivalent
to Universal Turing Machines. If they are restricted to computing
smaller classes of recursive functions, the question dissolves.
Sequential computers are idealized as having infinite memory because
that makes it easier to study asymptotic behavior mathematically. Of
course, we all know that a more accurate idealization of sequential
computers is the finite automaton (for which there is no halting
problem, of course!).
The discussion on this issue seemed to presuppose that "minds" are the
same kind of object as existing (finite!) computing devices. Accepting
this presupposition for a moment (I am agnostic on the matter), the
above argument applies and the discussion is shown to be vacuous.
Thus fall undecidability arguments in psychology and linguistics...
Fernando Pereira
PS. Any silliness about unlimited amounts of external memory
will be profitably avoided.
------------------------------
Date: 7 Oct 83 1317 EDT (Friday)
From: Robert.Frederking@CMU-CS-A (C410RF60)
Subject: AI halting problem
Actually, this isn't a problem, as far as I can see. The Halting
Problem's problem is: there cannot be a program for a Turing-equivalent
machine that can tell whether *any* arbitrary program for that machine will
halt. The easiest proof that a Halts(x) procedure can't exist is the
following program: (due to Jon Bentley, I believe)
        if halts(x) then
            while true do print("rats")
What happens when you start this program up, with itself as x? If
halts(x) returns true, it won't halt, and if halts(x) returns false, it
will halt. This is a contradiction, so halts(x) can't exist.
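The same construction can be sketched in Prolog, with a purely hypothetical halts/1 that is supposed to succeed exactly when its goal argument would terminate:

    loop :- loop.                          % a goal that never terminates

    contrary :- halts(contrary), loop.     % if contrary would halt, run forever
    contrary :- \+ halts(contrary).        % if contrary would not halt, succeed at once

    % Whatever halts(contrary) answers, the goal ?- contrary. does the
    % opposite, so no correct halts/1 can exist.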
My question is, what does this have to do with AI? Answer, not
much. There are lots of programs which always halt. You just can't
have a program which can tell you *for* *any* *program* whether it will
halt. Furthermore, human beings don't want to halt, i.e., die (this
isn't really a problem, since the question is whether their mental
subroutines halt).
So as long as the mind constructs only programs which will
definitely halt, it's safe. Beings which aren't careful about this
fail to breed, and are weeded out by evolution. (Serves them right.)
All of this seems to assume that people are Turing-equivalent (without
pencil and paper), which probably isn't true, and certainly hasn't been
proved. At least I can't simulate a PDP-10 in my head, can you? So
let's get back to real discussions.
------------------------------
Date: Fri, 7 Oct 83 13:05:16 CDT
From: Paul.Milazzo <milazzo.rice@Rand-Relay>
Subject: Looping in humans
Anyone who believes the human mind incapable of looping has probably
never watched anyone play Rogue :-). The success of Rogomatic (the
automatic Rogue-playing program by Mauldin, et. al.) demonstrates that
the game can be played by deriving one's next move from a simple
*fixed* set of operations on the current game state.
Even in the light of this demonstration, Rogue addicts sit hour after
hour mechanically striking keys, all thoughts of work, food, and sleep
forgotten, until forcibly removed by a girl- or boy-friend or system
crash. I claim that such behavior constitutes looping.
:-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-)
Paul Milazzo <milazzo.rice@Rand-Relay>
Dept. of Mathematical Sciences
Rice University, Houston, TX
P.S. A note to Rogue fans: I have played a few games myself, and
understand the appeal. One of the Rogomatic developers is a
former roommate of mine interested in part in overcoming the
addiction of rogue players everywhere. He, also, has played
a few games...
------------------------------
Date: 5 Oct 83 9:55:56-PDT (Wed)
From: hplabs!hao!seismo!philabs!cmcl2!floyd!clyde!akgua!emory!gatech!owens
@ Ucb-Vax
Subject: Re: a definition of consciousness?
Article-I.D.: gatech.1379
I was doing required reading for a linguistics class when I
came across an interesting view of consciousness in "Foundations
of the Theory of Signs", by Charles Morris, section VI, subsection
12, about the 6th paragraph (It's also in the International
Encyclopedia of Unified Science, Otto Neurath, ed.).
To say that Y experiences X is to define a relation E of which
Y is the domain and X is the range. Thus, yEx says that it is true
that y experiences x. E does not follow normal relational rules
(not transitive or symmetric. I can experience joe, and joe can
experience fred, but it's not necessarily so that I thus experience
fred.) Morris goes on to state that yEx is a "conscious experience"
if yE(yEx) ALSO holds, otherwise it's an "unconscious experience".
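That definition drops straight into Prolog as a small sketch (the predicate names here are hypothetical, and the nested term stands for the experience yEx itself):

    experiences(joe, sunset).
    experiences(joe, experiences(joe, sunset)).    % joe also experiences his experiencing of it

    conscious_experience(Y, X) :-
            experiences(Y, X),
            experiences(Y, experiences(Y, X)).

    unconscious_experience(Y, X) :-
            experiences(Y, X),
            \+ experiences(Y, experiences(Y, X)).

    % ?- conscious_experience(joe, sunset).       succeeds
    % ?- unconscious_experience(joe, sunset).     fails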
Interesting. Note that there is no infinite regress of
yE(yE(yE....)) that is usually postulated as being a consequence of
computer consciousness. However the function that defines E is defined,
it only needs to have the POTENTIAL of being able to fit yEx as an x in
another yEx, where y is itself. Could the fact that the postulated
computer has the option of NOT doing the insertion be some basis for
free will??? Would the required infinite regress of yE(yE(yE....
manifest some sort of compulsiveness that rules out free will?? (not to
say that an addict of some sort has no free will, although it's worth
thinking about).
Question: Am I trivializing the problem by reducing the question of
whether consciousness exists to the ability to define the relation
E? Are there OTHER questions that I haven't considered that would
strengthen or weaken that supposition? No flames, please, since this
ain't a flame.
G. Owens
at gatech CSNET.
------------------------------
Date: 6 Oct 83 9:38:19-PDT (Thu)
From: ihnp4!ihuxr!lew @ Ucb-Vax
Subject: towards a calculus of the subjective
Article-I.D.: ihuxr.685
I posted some articles to net.philosophy a while back on this topic
but I didn't get much of a rise out of anybody. Maybe this is a better
forum. (Then again, ...) I'm induced to try here by G. Owens' article,
"Re: definition of consciousness".
Instead of trying to formulate a general characteristic of conscious
experience, what about trying to characterize different types of subjective
experience in terms of their physical correlates? In particular, what's
the difference between seeing a color (say) and hearing a sound? Even
more particularly, what's the difference between seeing red, and seeing blue?
I think the last question provides a potential experimental test of
dualism. If it could be shown that the subjective experience of a red
image was constituted by an internal set of "red" image cells, and similarly
for a blue image, I would regard this as a proof of dualism. This is
assuming the "red" and "blue" cells to be physically equivalent. The
choice between which were "red" and which were "blue" would have no
physical basis.
On the other hand, suppose there were some qualitative difference in
the firing patterns associated with seeing red versus seeing blue.
We would have a physical difference to hang our hat on, but we would
still be left with the problem of forming a calculus of the subjective.
That is, we would have to figure out a way to deduce the type of subjective
experience from its physical correlates.
A successful effort might show how to experience completely new colors,
for example. Maybe our restriction to a 3-d color space is due to
the restricted stimulation of subjective color space by three inputs.
Any acid heads care to comment?
These thoughts were inspired by Thomas Nagel's "What is it like to be a bat?"
in "The Minds I". I think the whole subjective-objective problem is
given short shrift by radical AI advocates. Hofstadter's critique of
Nagel's article was interesting, but I don't think it addressed Nagel's
main point.
Lew Mammel, Jr. ihuxr!lew
------------------------------
Date: 6 Oct 83 10:06:54-PDT (Thu)
From: ihnp4!zehntel!tektronix!tekecs!orca!brucec @ Ucb-Vax
Subject: Re: Parallelism and Physiology
Article-I.D.: orca.179
-------
Re the article posted by Rik Verstraete <rik@UCLA-CS>:
In general, I agree with your statements, and I like the direction of
your thinking. If we conclude that each level of organization in a
system (e.g. a conscious mind) is based in some way on the next lower
level, it seems reasonable to suppose that there is in some sense a
measure of detail, a density of organization if you will, which has a
lower limit for a given level before it can support the next level.
Thus there would be, in the same sense, a median density for the
levels of the system (mind), and a standard deviation, which I
conjecture would be bounded in any successful system (only the top
level is likely to be wildly different in density, and that lower than
the median).
Maybe the distinction between the words learning and
self-organization is only a matter of granularity too. (??)
I agree. I think that learning is simply a sophisticated form of
optimization of a self-organizing system in a *very* large state
space. Maybe I shouldn't have said "simply." Learning at the level of
human beings is hardly trivial.
Certainly, there are not physically two types of memories, LTM
and STM. The concept of LTM/STM is only a paradigm (no doubt a
very useful one), but when it comes to implementing the concept,
there is a large discrepancy between brains and machines.
Don't rush to decide that there aren't two mechanisms. The concepts of
LTM and STM were developed as a result of observation, not from theory.
There are fundamental functional differences between the two. They
*may* be manifestations of the same physical mechanism, but I don't
believe there is strong evidence to support that claim. I must admit
that my connection to neurophysiology is some years in the past
so I may be unaware of recent research. Does anyone out there have
references that would help in this discussion?
------------------------------
Date: 7 Oct 83 15:38:14-PDT (Fri)
From: harpo!floyd!vax135!ariel!norm @ Ucb-Vax
Subject: Re: life is but a dream
Article-I.D.: ariel.482
re Michael Massimilla's idea (not original, of course) that consciousness
and self-awareness are ILLUSIONS. Where did he get the concept of ILLUSION?
The stolen concept fallacy strikes again! This fallacy is that of using
a concept while denying its genetic roots... See back issues of the Objectivist
for a discussion of this fallacy.... --Norm on ariel, Holmdel, N.J.
------------------------------
Date: 7 Oct 83 11:17:36-PDT (Fri)
From: ihnp4!ihuxr!lew @ Ucb-Vax
Subject: life is but a dream
Article-I.D.: ihuxr.690
Michael Massimilla informs us that consciousness and self-awareness are
ILLUSIONS. This is like saying "It's all in your mind." As Nietzsche said,
"One sometimes remains faithful to a cause simply because its opponents
do not cease to be insipid."
Lew Mammel, Jr. ihuxr!lew
------------------------------
Date: 5 Oct 83 1:07:31-PDT (Wed)
From: decvax!duke!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: RE: Rational Psychology
Article-I.D.: ncsu.2357
Someone's recent attempt to make the meaning of "Rational Psychology" seem
trivial misses the point a number of people have made in commenting on the
odd nature of the name. The reasoning was something like this:
1) rational "X" means the same thing regardless of what "X" is.
2) => rational psychology is a clear and simple thing
3) wake up guys, you're being dumb.
Well, I think this line misses at least one point. The argument above
is probably sound provided one accepts the initial premise, which I do not
neccessarily accept. Another example of the logic may help.
1) Brute Force elaboration solves problems of set membership. E.g. just
look at the item and compare it with every member of the set. This
is a true statement for a wide range of possible sets.
2) Real Numbers are a kind of set.
3) Wake up Cantor, you're wasting (or have wasted) your time.
It seems quite clear that in the latter example, the premise is naive and
simply fails to apply to sets of infinite proportions. (Or more properly
one must go to some effort to justify such use.)
The same issue applies to the notion of Rational Psychology. Does it make
sense to attempt to apply techniques which may be completely inadequate?
Rational analysis may fail completely to explain the workings of the mind,
esp when we are looking at the "non-analytic" capabilities that are
implied by psychology. We are on the edge of a philosophical debate, with
terms like "dual-ism" and "physical-ism" etc marking out party lines.
It may be just as ridiculous to some people to propose a rational study
of psychology as it seems to most of us that one use finite analysis
to deal with trans-finite cardinalities [or] as it seems to some people to
propose to explain the mind via physics alone. Clearly, the people who
expect rational analytic method to be fruitful in the field of psychology
are welcome to coin a new name for themselves. But if they, or anyone else,
has really "Got it now," please write a dissertation on the subject and
enter history alongside Kant, St Thomas Aquinas, Kierkegaard ....
----GaryFostel----
------------------------------
Date: 4 Oct 83 8:54:09-PDT (Tue)
From: decvax!linus!philabs!seismo!rlgvax!cvl!umcp-cs!velu @ Ucb-Vax
Subject: Rational Psychology - Gary Fostel's message
Article-I.D.: umcp-cs.2953
Unfortunately, however, many pet theories in Physics have come about as
inspirations, and not from the "technical origins" as you have stated!
(What is a "technical origin", anyway????)
As I see it, in any science a pet theory is a combination of insight,
inspiration, and a knowledge of the laws governing that field. If we
just went by known facts, and did not dream on, we would not have
gotten anywhere!
- Velu
-----
Velu Sinha, U of MD, College Park
UUCP: {seismo,allegra,brl-bmd}!umcp-cs!velu
CSNet: velu@umcp-cs ARPA: velu.umcp-cs@UDel-Relay
------------------------------
Date: 6 Oct 83 12:00:15-PDT (Thu)
From: decvax!duke!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: RE: Intuition in Physics
Article-I.D.: ncsu.2360
Some few days ago I suggested that there was something "different"
about psychology and tried to draw a distinction between the flash
of insight or the pet theory in physics as compared to psychology.
Well, someone else commented on the original, in a way that suggested
I missed the mark in my original effort to make it clear. One more time:
I presume that at birth, one's mind is not predisposed to one or another
of several possible theories of heavy molecule collision (for example.)
Further, I think it unlikely that personal or emotional interaction in
one "pre-analytic" stage (see anything about developmental psych.) is
is likely to bear upon one's opinions about those molecules. In fact I
find it hard to believe that anything BUT technical learning is likely
to bear on one's intuition about the molecules. One might want to argue
that one's personality might force you to lean towards "aggressive" or
overly complex theories, but I doubt that such effects will lead to
the creation of a theory. Only a rather mild predisposition at best.
In psychology it is entirely different. A person who is aggressive has
lots of reasons to assume everyone else is as well. Or paranoid, or
that rote learning is esp good or bad, or that large dogs are dangerous
or a number of other things that bear directly on one's theories of the
mind. And these biases are acquired from the process of living and are
quite un-avoidable. This is not technical learning. The effect is
that even in the face of considerable technical learning, one's intuition
or "pet theories" in psychology might be heavily influenced in creation
of the theory as well as selection, by one's life experiences, possibly
to the exclusion of one's technical opinions. (Who knows what goes on in
the sub-conscious.) While one does not encounter heavy molecules often
in one's everyday life or one's childhood, one DOES encounter other people
and more significantly one's own mind.
It seems clear that intuition in physics is based upon a different sort
of knowledge than intuition about psychology. The latter is a combination
of technical AND everyday intuition while the former is not.
----GaryFostel----
------------------------------
End of AIList Digest
********************
∂11-Oct-83 1013 GOLUB@SU-SCORE.ARPA Today's lunch
Received: from SU-SCORE by SU-AI with TCP/SMTP; 11 Oct 83 10:12:55 PDT
Date: Tue 11 Oct 83 10:13:31-PDT
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Today's lunch
To: faculty@SU-SCORE.ARPA
L Zadeh will be the guest at lunch today. GENE
-------
∂11-Oct-83 1534 SCHMIDT@SUMEX-AIM LM-2 down Thursday 8am - noon
Received: from SUMEX-AIM by SU-AI with PUP; 11-Oct-83 15:33 PDT
Date: Tue 11 Oct 83 15:36:21-PDT
From: Christopher Schmidt <SCHMIDT@SUMEX-AIM>
Subject: LM-2 down Thursday 8am - noon
To: HPP-Lisp-Machines@SUMEX-AIM
In order to rearrange MJH 433 to accommodate the LM-3600's, the LM-2
is apt to be down Thursday morning for some interval of time roughly between
8 am and noon.
--Christopher
-------
∂11-Oct-83 1539 SCHMIDT@SUMEX-AIM Symbolics chrome
Received: from SUMEX-AIM by SU-AI with PUP; 11-Oct-83 15:39 PDT
Date: Tue 11 Oct 83 15:42:06-PDT
From: Christopher Schmidt <SCHMIDT@SUMEX-AIM>
Subject: Symbolics chrome
To: HPP-Lisp-Machines@SUMEX-AIM
Does anyone know who took the chrome (or plastic) Symbolics trademarks
off of the LM-2 and its disk cabinet? Or why?
--Christopher
-------
∂11-Oct-83 1749 BRODER@SU-SCORE.ARPA Next AFLB talk(s)
Received: from SU-SCORE by SU-AI with TCP/SMTP; 11 Oct 83 17:49:39 PDT
Date: Tue 11 Oct 83 17:50:11-PDT
From: Andrei Broder <Broder@SU-SCORE.ARPA>
Subject: Next AFLB talk(s)
To: aflb.all@SU-SCORE.ARPA
cc: sharon@SU-SCORE.ARPA
Stanford-Office: MJH 325, Tel. (415) 497-1787
N E X T A F L B T A L K (S)
10/13/83 - Harry Mairson (Stanford):
"Reporting Line Segment Intersections in the Plane"
Given two sets S and T of line segments in the plane, where no two
line segments in S (similarly, T) intersect, we would like to compute
and report in an efficient manner all pairs (s,t) in S x T of
intersecting line segments. This problem in computational plane
geometry has obvious applications in computer aided design,
particularly in the layout of integrated circuits, as well as in
high-speed computer graphics. We present an algorithm which reports
all intersections (s,t) in O(n*log n + i) time and O(n) space, where n
is the total number of line segments, and i is the number of
intersections. This algorithm can be used to compute the regions of
intersection of two simple polygons or of two embeddings of planar
graphs where the edges are straight lines, and can be used to merge
two such embeddings together.
This is joint work with Jorge Stolfi.
******** Time and place: Oct. 13, 12:30 pm in MJ352 (Bldg. 460) *******
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Regular AFLB meetings are on Thursdays, at 12:30pm, in MJ352 (Bldg.
460).
If you have a topic you would like to talk about in the AFLB seminar
please tell me. (Electronic mail: broder@su-score.arpa, Office: Jacks
Hall 325, 497-1787) Contributions are wanted and welcome. Not all
time slots for the autumn quarter have been filled so far.
For more information about future AFLB meetings and topics you might
want to look at the file [SCORE]<broder>aflb.bboard .
- Andrei Broder
-------
∂11-Oct-83 1950 LAWS@SRI-AI.ARPA AIList Digest V1 #74
Received: from SRI-AI by SU-AI with TCP/SMTP; 11 Oct 83 19:49:59 PDT
Date: Tuesday, October 11, 1983 11:25AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #74
To: AIList@SRI-AI
AIList Digest Wednesday, 12 Oct 1983 Volume 1 : Issue 74
Today's Topics:
Journals - AI Journal,
Query - Miller's "Living Systems",
Technology Transfer - DoD Reviews,
Consciousness
----------------------------------------------------------------------
Date: Tue, 11 Oct 83 07:54 PDT
From: Bobrow.PA@PARC-MAXC.ARPA
Subject: AI Journal
The information provided by Larry Cipriani about the AI Journal in the
last issue of AIList is WRONG in a number of important particulars.
Institutional subscriptions to the Artificial Intelligence Journal are
$176 this year (not $136). Personal subscriptions are available
for $50 per year for members of the AAAI, SIGART and AISB. The
circulation is about 2,000 (not 1,100). Finally, the AI journal
consists of eight issues this year, and nine issues next year (not
bimonthly).
Thanks
Dan Bobrow (Editor-in-Chief)
Bobrow@PARC
------------------------------
Date: Mon, 10 Oct 83 15:41 EDT
From: David Axler <Axler.UPenn@Rand-Relay>
Subject: Bibliographic Query
Just wondering if anybody out there has read the book 'Living Systems'
by James G. Miller (McGraw-Hill, 1977), and, if so, whether they feel that
Miller's theories have any relevance to present-day AI research. I won't
even attempt to summarize the book's content here, as it's over 1K pages in
length, but some of the reviews of it that I've run across seem to imply that
it might well be useful in some AI work.
Any comments?
Dave Axler (Axler.Upenn-1100@UPenn@Udel-Relay)
------------------------------
Date: 7 Oct 1983 08:11-EDT
From: TAYLOR@RADC-TOPS20
Subject: DoD "reviews"
I must agree with Earl Weaver's comments on the DoD review of DoD
sponsored publications with one additional comment...since I have
"lived and worked" in that environment for more than six years.
DoD has learned (through experience) that given enough
unclassified material, much classified information can be
deduced. I have seen documents whose individual paragraphs were
unclassified, but when grouped together as a single document they
provided too much sensitive information to leave unclassified.
Roz (RTaylor@RADC-MULTICS)
------------------------------
Date: 4 Oct 83 19:25:13-PDT (Tue)
From: ihnp4!zehntel!tektronix!tekcad!ricks @ Ucb-Vax
Subject: Re: Conference Announcement - (nf)
Article-I.D.: tekcad.66
> **************** CONFERENCE ****************
>
> "Intelligent Systems and Machines"
>
> Oakland University, Rochester Michigan
>
> April 24-25, 1984
>
> *********************************************
>
>AUTHORS PLEASE NOTE: A Public Release/Sensitivity Approval is necessary.
>Authors from DOD, DOD contractors, and individuals whose work is government
>funded must have their papers reviewed for public release and more
>importantly sensitivity (i.e. an operations security review for sensitive
>unclassified material) by the security office of their sponsoring agency.
Another example of so-called "scientists" bowing to governmental
pressure to let them decide if the paper you want to publish is OK to
publish. I think that this type of activity is reprehensible, and as
concerned scientists we should do everything in our power to stop this
censorship of research. I urge everyone to boycott this conference and any
others like it which REQUIRE a Public Release/Sensitivity Approval (funny
how the government tries to make censorship palatable with different words,
isn't it). If we don't stop this now, we may be passing every bit of research
we do under the nose of bureaucrats who don't know an expert system from
an accounting package and who have the power to stop publication of anything
they consider dangerous.
I'm mad as hell and I'm not going to
take it anymore!!!!
Frank Adrian
(teklabs!tekcad!franka)
------------------------------
Date: 6 Oct 83 6:13:46-PDT (Thu)
From: hplabs!hao!seismo!rlgvax!cvl!umcp-cs!aplvax!eric @ Ucb-Vax
Subject: Re: Alas, I must flame...
Article-I.D.: aplvax.358
The "sensitivity" issue is not limited to government - most
companies also limit the distribution of information that they
consider "company private". I find very little wrong with the
idea of "we paid for it, we should benefit from it". The simple
truth is that they did underwrite the cost of the research. No one
is forced to work under these conditions, but if you want to take
the bucks, you have to realize that there are conditions attached
to them. On the whole, DoD has been amazingly open with the disclosure
of its CS research - one big example is ARPANET. True, they are now
wanting to split it up, but they are still leaving half of it to
research facilities who did not foot the bill for its development.
Perhaps it can be carried to extremes (I have never seen that happen,
but let's assume that it can happen); still, they contracted for the work
to be done, and it is theirs to do with as they wish.
--
eric
...!seismo!umcp-cs!aplvax!eric
------------------------------
Date: 7 Oct 83 18:56:18-PDT (Fri)
From: npois!hogpc!houti!ariel!vax135!floyd!cmcl2!csd1!condict@Ucb-Vax
Subject: Re: the Halting problem.
Article-I.D.: csd1.124
[Very long article.]
Self-awareness is an illusion? I've heard this curious statement
before and never understood it. YOUR self-awareness may be an
illusion that is fooling me, and you may think that MY self-awareness
is an illusion, but one thing that you cannot deny (the very, only
thing that you know for sure) is that you, yourself, in there looking
out at the world through your eyeballs, are aware of yourself doing
that. At least you cannot deny it if it is true. The point is, I
know that I have self-awareness -- by the very act of experiencing
it. You cannot take this away from me by telling me that my
experience is an illusion. That is a patently ludicrous statement,
sillier even than when your mother (no offense -- okay, my mother,
then) used to tell you that the pain was all in your head. Of course
it is! That is exactly what the problem is!
Let me try to say this another way, since I have never been able to
get this across to someone who doesn't already believe it. There are
some statements that are true by definition, for instance, the
statement, "I pronounce you man and wife". The pronouncement happens
by the very saying of it and cannot be denied by anyone who has heard
it, although the legitimacy of the marriage can be questioned, of
course. The self-awareness thing is completely internal, so you may
sensibly question the statement "I have self-awareness" when it comes
from someone else. What you cannot rationally say is "Gee, I wonder
if I really am aware of being in this body and looking down at my
hands with these two eyes and making my fingers wiggle at will?" To
ask this question seriously of yourself is an indication that you
need immediate psychiatric help. Go directly to Bellevue and commit
yourself. It is as lunatic a question as asking yourself "Gee, am I
really feeling this pain or is it only an illusion that I hurt so bad
that I would happily throw myself in the trash masher to extinguish
it?"
For those of you who misunderstand what I mean by self-awareness,
here is the best I can do at an explanation. There is an obvious
sense in which my body is not me. You can cut off any piece of it
that leaves the rest functioning (alive and able to think) and the
piece that is cut off will not take part in any of my experiences,
while the rest of the body will still contain (be the center for?) my
self-awareness. You may think that this is just because my brain is
in the big piece. No, there is something more to it than that. With
a little imagination you can picture an android being constructed
someday that has an AI brain that can be programmed with all the
memories you have now and all the same mental faculties. Now picture
yourself observing the android and noting that it is an exact copy of
you. You can then imagine actually BEING that android, seeing what
it sees, feeling what it feels. What is the difference between
observing the android and being the android? It is just this -- in
the latter case your self-awareness is centered in the android, while
in the former it is not. That is what self-awareness, also called a
soul, is. It is the one true meaning of the word "I", which does not
refer to any particular collection of atoms, but rather to the "you"
that is occupying the body. This is not a religious issue either, so
back off, all you atheist and Christian fanatics. I'm just calling
it a soul because it is the real "me", and I can imagine it residing
in various different bodies and machines, although I would, of
course, prefer some to others.
This, then, is the reason I would never step into one of those
teleporters that functions by ripping apart your atoms, then
reconstructing an exact copy at a distant site. My self-awareness,
while it doesn't need a biological body to exist, needs something!
What guarantee do I have that "I", the "me" that sees and hears the
door of the transporter chamber clang shut, will actually be able to
find the new copy of my body when it is reconstructed three million
parsecs away? Some of you are laughing at my lack of modernism here,
but I can have the last laugh if you're stupid enough to get into the
teleporter with me at the controls. Suppose it functions like this
(from a real sci-fi story that I read): It scans your body, transmits
the copying information, then when it is certain that the copy got
through it zaps the old copy, to avoid the inconvenience of there
being two of you (a real mess at tax time!). Now this doesn't bother
you a bit since it all happens in micro-seconds and your
self-awareness, being an illusion, is not to be consulted in the
matter. But suppose I put your beliefs to the test by setting the
controls so that the copy is made but the original is not destroyed.
You get out of the teleporter at both ends, with the original you
thinking that something went wrong. I greet you with:
"Hi there! Don't worry, you got transported okay. Here, you can
talk to your copy on the telephone to make sure. The reason that I
didn't destroy this copy of you is because I thought you would enjoy
doing it yourself. Not many people get to commit suicide and still
be around to talk about it at cocktail parties, eh? Now, would you
like the hara-kiri knife, the laser death ray, or the nice little red
pills?"
You, of course, would see no problem whatsoever with doing yourself
in on the spot, and would thank me for adding a little excitement to
your otherwise mundane trip. Right? What, you have a problem with
this scenario? Oh, it doesn't bother you if only one copy of you
exists at a time, but if there are ever two, by some error, your
spouse is stuck with both of you? What does the timing have to do
with your belief in self-awareness? Relativity theory says that the
order of the two events is indeterminate anyway.
People who won't admit the reality of their own self-awareness have
always bothered me. I'm not sure I want to go out for a beer with,
much less date or marry someone who doesn't at least claim to have
self-awareness (even if they're only faking). I get this image of me
riding in a car with this non-self-aware person, when suddenly, as we
reach a curve with a huge semi coming in the other direction, they
fail to move the wheel to stay in the right lane, not seeing any
particular reason to attempt to extend their own unimportant
existence. After all, if their awareness is just an illusion, the
implication is that they are really just a biological automaton and
it don't make no never mind what happens to it (or the one in the
next seat, for that matter, emitting the strange sounds and clutching
the dashboard).
The Big Unanswered Question then (which belongs in net.philosophy,
where I will expect to see the answer) is this:
"Why do I have self-awareness?"
By this I do not mean, why does my body emit sounds that your body
interprets to be statements that my body is making about itself. I
mean why am *I* here, and not just my body and brain? You can't tell
me that I'm not, because I have a better vantage point than you do,
being me and not you. I am the only one qualified to rule on the
issue, and I'll thank you to keep your opinion to yourself. This
doesn't alter the fact that I find my existence (that is, the
existence of my awareness, not my physical support system), to be
rather arbitrary. I feel that my body/brain combination could get
along just fine without it, and would not waste so much time reading
and writing windy news articles.
Enough of this, already, but I want to close by describing what
happened when I had this conversation with two good friends. They
were refusing to agree to any of it, and I was starting to get a
little suspicious. Only, half in jest, I tried explaining things
this way. I said:
"Look, I know I'm in here, I can see myself seeing and hear myself
hearing, but I'm willing to admit that maybe you two aren't really
self-aware. Maybe, in fact, you're robots, everybody is robots
except me. There really is no Cornell University, or U.S.A. for that
matter. It's all an elaborate production by some insidious showman
who constructs fake buildings and offices wherever I go and rips them
down behind me when I leave."
Whereupon a strange, unreadable look came over Dean's face, and he
called to someone I couldn't see, "Okay, jig's up! Cut! He figured it
out." (Hands motioning, now) "Get, those props out of here, tear down
those building fronts, ... "
Scared the pants off me.
Michael Condict ...!cmcl2!csd1!condict
New York U.
------------------------------
End of AIList Digest
********************
∂12-Oct-83 0022 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #35
Received: from SU-SCORE by SU-AI with TCP/SMTP; 12 Oct 83 00:22:33 PDT
Date: Tuesday, October 11, 1983 9:09PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #35
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Wednesday, 12 Oct 1983 Volume 1 : Issue 35
Today's Topics:
Implementations - Bagof & Predicates & Assert,
Help With Two Prolog Problems
----------------------------------------------------------------------
Date: Mon 10 Oct 83 21:17:17-PDT
From: SHardy@SRI-KL
Subject: Bagof
How does one decide what Bagof or Assert/Retract should do?
Recently, I read of a new implementation of Prolog. It had an
exciting new lazy evaluation mode. It could outperform DEC-10
Prolog. What is more, it had access to all sorts of good things
like screen editors and windows.
Unfortunately, its definition of Bagof was ``wrong''; that is, it
didn't agree with the definition of Bagof on the DEC-20.
Actually, this doesn't bother me since I think DEC-20 Prolog
has it wrong. As Richard says, it depends on what one thinks
should happen to calls like:
?- bagof(X, likes(X, Y), LIKERS).
Should LIKERS be the bag of all Xs that like anything or should
it be the bag of all Xs that like the same thing with failure
generating a new set?
The interpretation I prefer is the first; it should be the set
of all Xs who like anything.
I understand how others may disagree with my preference. I don't
understand how one could think one interpretation ``objectively''
right and the other wrong.
There is just a little Edinburgh imperialism underlying Richard's
messages !
-- Steve,
Teknowledge
PS: it is a mistake to have Assert/Retract modify the behaviour
of currently active procedure calls. That's why the Newpay
example is so hard in DEC-10 Prolog. The solution is to
change DEC-10 Prolog.
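[A small sketch of the two readings, from the editor rather than the
author: it assumes an Edinburgh-style bagof/3 with ^ marking a variable
as existentially quantified, and a hypothetical likes/2 table.]

    likes(alice, wine).
    likes(bert, wine).
    likes(bert, beer).

    % Y left free: one bag per binding of Y, delivered on backtracking
    % (the order of the bindings may vary between systems).
    % ?- bagof(X, likes(X, Y), L).
    %        Y = wine, L = [alice,bert] ;
    %        Y = beer, L = [bert]

    % Y quantified away: a single bag of every X that likes anything.
    % ?- bagof(X, Y^likes(X, Y), L).
    %        L = [alice,bert,bert]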
------------------------------
Date: Mon 10 Oct 83 20:50:02-PDT
From: SHardy@SRI-KL
Subject: Built In Predicates
I disagree with the proposal that built-in predicates should emulate
tables of assertions.
To summarize, this view states that if Prolog provides a predicate
such as SUCC then it should act as if defined by a table, thus:
succ(1, 2).
succ(2, 3).
succ(3, 4).
etc
Calls to succ such as:
succ(foo, X).
should, according to this view, simply fail.
Chances are that if my Prolog program ever executes the above
call, it's an error. The very last thing I want a programming
system to do when I've made a mistake is to execute some more
or less random non-local GOTO - which is what an unanticipated
fail usually amounts to.
Crucially, we have to decide whether Prolog is a practical
programming language (and so subject to occasional compromises)
or a concept too pure to be sullied by practical considerations.
The ``principle'' by which implementors make decisions should
be ``what helps the user''.
-- Steve Hardy,
Teknowledge
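[An illustrative sketch of the alternative argued for here, from the
editor: a hypothetical succ/2 that reports a bad call instead of
quietly failing. The error clause and its message are invented for
the example; they are not from any particular Prolog system.]

    succ(X, Y) :- integer(X), !, X >= 1, Y is X + 1.
    succ(X, Y) :- integer(Y), !, Y >= 2, X is Y - 1.
    succ(X, Y) :-                      % neither argument is an integer
        write('Error: bad call to succ/2: '),
        write(succ(X, Y)), nl,
        fail.

    % ?- succ(2, N).     binds N = 3
    % ?- succ(foo, N).   prints the error message and then fails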
------------------------------
Date: Mon 10 Oct 83 20:39:07-PDT
From: SHardy@SRI-KL
Subject: Use of Assert
A recent message on the use of Assert seemed to imply that it, and
Retract, shouldn't be used because neither is well implemented on
the DEC-10 and both are, in fact, quite hard to implement.
In general, I think it a bad idea to object to using a feature of
a language because it is often badly implemented.
If carried to an extreme, this view becomes silly. I once saw an
article ``refuting'' the claim that Prolog was inefficient by
saying that the claimant had not programmed ``idiomatically''
(I.e. had used Assert) and so, presumably, deserved all he got.
Although Assert is ``not very logical'', it can be extremely useful.
Without Assert one could not implement SetOf. Without SetOf all
kinds of things (such as making use of a closed world assumption)
are hard.
Crucially, Prolog now has several classes of user. Some are
concerned with its purity and logical roots; others are concerned
with getting fast performance out of Prolog on Von Neumann machines;
others are concerned with using Prolog to solve some problem.
Why should the last group be bothered by the concerns of the first
two?
-- Steve Hardy,
Teknowledge
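[For readers who have not seen it, here is the classic sketch of a
findall-style collector built on assert and retract, essentially as
in Clocksin & Mellish; the predicate names are the editor's.]

    find_all(X, Goal, _) :-
        asserta(found(mark)),          % marker, so nested calls work
        call(Goal),
        asserta(found(X)),             % record one solution
        fail.                          % drive Goal through all solutions
    find_all(_, _, List) :-
        collect_found([], List).

    collect_found(Acc, List) :-
        get_next(X), !,
        collect_found([X|Acc], List).
    collect_found(List, List).

    get_next(X) :-
        retract(found(X)), !,          % never backtrack into retract
        X \== mark.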
------------------------------
Date: Mon, 10 Oct 83 19:34:56 PDT
From: Bijan Arbab <v.Bijan@UCLA-LOCUS>
Subject: Help With Two Prolog Problems
Please see if you have a solution, hint, reference, interest
or etc. in the following two problems I am posting. The
description of the first problem is rather long so please
drag along (I am sorry for that).
First Problem:
1. on planning and the 'frame problem' In the book 'Logic For
Problem Solving' on p.133 Kowalski writes:
"... The use of logic, in both the n-ary and binary representations,
runs into the 'frame problem': how to deal with the fact that almost
all statements which hold true of a given state continue to hold
after an action has been preformed. It has aften been assumed that
such facts cannot be expressed naturally in logic and cannot be used
efficiently.
The supposed inadequacies of logic have led to the development of
special systems, such as Strips and Planner. We shall argue that an
equally satisfactory treatment of the frame problem can be obtained
in logic: by using terms to name statements and by using the frame
axiom, which describes the statements which continue to hold after
an action has been performed, top-down rather than bottom-up."
He then continues to explain; on p. 135 he gives the following
program:
Initial state 0
(1) poss(0)
(2) holds(on(a,b),0)
(3) holds(on(b,p),0) A
(4) holds(on(c,r),0) B C
(5) holds(clear(a),0) -------------
(6) holds(clear(q),0) p q r
(7) holds(clear(c),0)
State-Independent Assertions
(8) manip(a)
(9) manip(b)
(10) manip(c)
Goal State
(11) <- holds(on(a,b),W), holds(on(b,c),W), holds(on(c,r),W), poss(W).
State Space and Preconditions
(12) poss(result(trans(X,Y,Z),W)) <- poss(W), manip(X), diff(X,Z),
holds(clear(X),W), holds(clear(Z),W),
holds(on(X,Y),W).
Added Statements
(13) holds(on(X,Z), result(trans(X,Y,Z),W))
(14) holds(clear(Y), result(trans(X,Y,Z),W))
Frame Axiom and Deleted Statements
(15) holds(U,result(trans(X,Y,Z),W)) <- holds(U,W), diff(U, on(X,Y)),
diff(U, clear(Z))
Then he claims that if the clauses are executed top-down and if the
subgoals are considered breadth-first and left to right in the order
they are written we will get the following trace of the program:
<-holds(on(a,b),W), holds(on(b,c),W), holds(on(c,r),W),
poss(W)
13 |
15 | W=result(trans(a,Y,b),W1)
15 |
12 |
<-holds(on(b,c),W1), holds(on(c,r),W1), poss(W1), manip(a),
holds(clear(a),W1), holds(clear(b),W1), holds(on(a,Y),W1),
diff(a,b)
13 |
15 |
12 |
8 | W1= result(trans(a,Y1,c),W2)
15 |
15 |
15 |
<-holds(on(c,r),W2), poss(W2), manip(b), holds(clear(b),W2),
holds(clear(c),W2), holds(on(b,Y1),W2), diff(b,c),
holds(clear(a),W2), holds(on(a,y),W2)
15 |
12 |
9 |
14 | W2= result(trans(a,b,Y),W3)
15 |
15 |
15 |
13 |
<-holds(on(c,r),W3), poss(W3), manip(a), holds(clear(a),W3),
holds(clear(y),W3), holds(on(a,b),W3), diff(a,y),
holds(clear(c),W3), holds(on(b,Y1),W3)
4|
1|
8|
5| W3=0 Y=q Y1=p
6|
2|
7|
3|
The Question is, how did he get the substitution for:
W2= result(trans(a,b,Y),W3)
by applying only top-down reasoning ?
I have reproduced his program in Prolog and am not able to get
the same trace. The problem is that at the point where he is
getting the substitution for W2 my program, which is the above
program typed in Prolog, will start to reason bottom-up and
therefore finds a solution that is longer.
Second Problem
2. on breadth-first search and Prolog
Can you write a function in Prolog for
is-all(A,Y,Q) where
A is a list of all Y's such that query Q is solved. E.G. if
the world is like
p(a)
p(b)
p(c)
and the goal is
is-all(A,Y,p(Y))
then the list a.b.c.nil should be bound to A. Is there a pure
Prolog definition for is-all? By pure Prolog I mean the use
of addaxiom and delaxiom is not allowed !
Note: in most Prologs an equivalent function is already built
in, but it is not defined in Prolog.
Thanks for all your help,
-- Bijan
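[Editor's note on the second problem: is-all(A,Y,Q) has the same
meaning as the usual findall operation, so given a findall/3 (built
into many Prologs, or definable with assert) the requested argument
order is simply:]

    is_all(A, Y, Q) :- findall(Y, Q, A).

    % ?- is_all(A, Y, p(Y)).    binds A = [a,b,c] for the world above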
------------------------------
End of PROLOG Digest
********************
∂12-Oct-83 1333 @SU-SCORE.ARPA:yao.pa@PARC-MAXC.ARPA Re: Next AFLB talk(s)
Received: from SU-SCORE by SU-AI with TCP/SMTP; 12 Oct 83 13:32:52 PDT
Received: from PARC-MAXC.ARPA by SU-SCORE.ARPA with TCP; Wed 12 Oct 83 13:23:18-PDT
Date: 12 Oct 83 13:22:09 PDT
From: yao.pa@PARC-MAXC.ARPA
Subject: Re: Next AFLB talk(s)
In-reply-to: "Broder@SU-SCORE.ARPA's message of Tue, 11 Oct 83 17:50:11
PDT"
To: Andrei Broder <Broder@SU-SCORE.ARPA>
cc: aflb.all@SU-SCORE.ARPA, sharon@SU-SCORE.ARPA
Andrei,
Please add my name to the mailing list of aflb. My EM address is Yao@score. Thanks.
Andy Yao
∂12-Oct-83 1827 LAWS@SRI-AI.ARPA AIList Digest V1 #75
Received: from SRI-AI by SU-AI with TCP/SMTP; 12 Oct 83 18:26:51 PDT
Date: Wednesday, October 12, 1983 10:41AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #75
To: AIList@SRI-AI
AIList Digest Thursday, 13 Oct 1983 Volume 1 : Issue 75
Today's Topics:
Music & AI - Poll Results,
Alert - September CACM,
Fuzzy Logic - Zadeh Syllogism,
Administrivia - Usenet Submissions & Seminar Notices,
Seminars - HP 10/13/83 & Rutgers Colloquium
----------------------------------------------------------------------
Date: 11 Oct 83 16:16:12 EDT (Tue)
From: Randy Trigg <randy%umcp-cs@UDel-Relay>
Subject: music poll results
Here are the results of my request for info on AI and music.
(I apologize for losing the header to the first mail below.)
- Randy
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Music in AI - find Art Wink formerly of U. of Pgh. Dept of info sci.
He had a real nice program to imitate Debussy (experts could not tell
its compositions from originals).
------------------------------
Date: 22 Sep 83 01:55-EST (Thu)
From: Michael Aramini <aramini@umass-cs>
Subject: RE: AI and music
At the AAAI conference, I was talking to someone from Atari (from Atari
Cambridge Labs, I think) who was doing work with AI and music. I can't
remember his name, however. He was working (with others) on automating
transforming music of one genre into another. This involved trying to
quasi-formally define what the characteristics of each genre of music are.
It sounded like they were doing a lot of work on defining ragtime and
converting ragtime to other genres. He said there were other people at Atari
that are working on modeling the emotional state various characteristics of
music evoke in the listener.
I am sorry that I don't have more info as to the names of these people or how
to get in touch with them. All that I know is that this work is being done
at Atari Labs either in Cambridge, MA or Palo Alto, CA.
------------------------------
Date: Thu 22 Sep 83 11:04:22-EDT
From: Ted Markowitz <TJM@COLUMBIA-20>
Subject: Music and AI
Cc: TJM@COLUMBIA-20
Having an undergrad degree in music and working toward a graduate
degree in CS, I'm very interested in any results you get from your
posting. I've been toying with the idea of working on a music-AI
interface, but haven't pinned down anything specific yet. What
is your research concerned with?
--ted
------------------------------
Date: 24 Sep 1983 20:27:57-PDT
From: Andy Cromarty <andy@aids-unix>
Subject: Music analysis/generation & AI
There are 3 places that immediately come to mind:
1. There is a huge and well-developed (indeed, venerable) computer
music group at Stanford. They currently occupy what used to be
the old AI Lab. I'm sure someone else will mention them, but if
not call Stanford (or send me another note and I'll find a net address
you can send mail to for details.)
2. Atari Research is doing a lot of this sort of work -- generation,
analysis, etc., both in Cambridge (Mass) and Sunnyvale (Calif.), I
believe.
3. Some very good work has come out of MIT in the past few years.
David Levitt is working on his PhD in this area there, having completed
his masters in AI approaches to Jazz improvisation, if my memory serves,
and I think William Paseman also wrote his masters on a related topic
there. Send mail to LEVITT@MIT-MC for info -- I'm sure he'd be happy
to tell you more about his work.
asc
------------------------------
Date: Wed 12 Oct 83 09:40:48-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Alert - September CACM
The September CACM contains the following interesting items:
A clever cover graphically illustrating the U.S. and Japanese
approaches to the Fifth Generation.
A Harper and Row ad (without prices) including Touretzky's
LISP: A Gentle Introduction to Symbolic Computation and
Eisenstadt and O'Shea's Artificial Intelligence: Tools,
Techniques and Applications. [AIList would welcome reviews.]
An editorial by Peter J. Denning on the manifest destiny of
AI to succeed because the concept is easily grasped, credible,
expected to succeed, and seen as an improvement.
An introduction and three articles about the Fifth Generation,
Japanese management, the Japanese effort, and MCC.
A report on BELLE's slim victory in the 13th N.A. Computer Chess
Championship.
A note on the sublanguages (i.e., natural restricted languages)
conference at NYU next January.
A note on DOD's wholesale adoption of ADA.
-- Ken Laws
------------------------------
Date: Wed 12 Oct 83 09:24:34-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Zadeh Syllogism
Lotfi Zadeh used a syllogism yesterday that was new to me. To
paraphrase slightly:
Cheap apartments are rare and highly sought.
Rare and highly sought objects are expensive.
---------------------------------------------
Cheap apartments are expensive.
I suppose any reasonable system will conclude that cheap apartments
cannot exist, which may in fact be the case.
-- Ken Laws
------------------------------
Date: Wed 12 Oct 83 10:20:57-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Usenet Submissions
It has come to my attention that I may be failing to distribute
some Usenet-originated submissions back to Usenet readers. If
this is true, I apologize. I have not been simply ignoring
submissions; if you haven't heard from me, the item was distributed
to the Arpanet.
The problem involves the Article-I.D. field in Usenet-
originated messages. The gateway software (maintained by
Knutsen@SRI-UNIX) ignores digest items containing this keyword
so that messages originating from net.ai will not be posted
back to net.ai.
Unfortunately, messages sent directly to AIList instead of to
net.ai also contain this keyword. I have not been stripping it
out, and so the submissions have not been making it back to Usenet.
I will try to be more careful in the future. Direct AIList
contributors who want to be sure I don't slip should begin
their submissions with a "strip ID field" comment. Even a
"Dear Moderator," might trigger my editing instincts. I hope
to handle direct submissions correctly even without prompting,
but the visible distinction between the two message types is
slight.
-- Ken Laws
------------------------------
Date: Wed 12 Oct 83 10:04:03-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Seminar Notices
There have been a couple of net.ai requests lately that seminar
notices be dropped, plus a strong request that they be
continued. I would like to make a clear policy statement
on this matter. Anyone who wishes to discuss it further
may write to AIList-Request@SRI-AI; I will attempt to
compile opinions or moderate the discussion in a reasonable
manner.
Strictly speaking, AIList seldom prints "seminar notices".
Rather, it prints abstracts of AI-related talks. The abstract
is the primary item; the fact that the speaker is graduating
or out "selling" is secondary; and the possibility that AIList
readers might attend is tertiary. I try to distribute the
notices in a timely fashion, but responses to my original
query were two-to-one in favor of the abstracts even when the
talk had already been given.
The abstracts have been heavily weighted in favor of the
Bay Area; some readers have taken this to be provincialism.
Instead, it is simply the case that Stanford, Hewlett-Packard,
and occasionally SRI are the only sources available to me
that provide abstracts. Other sources would be welcome.
In the event that too many abstracts become available, I will
institute rigorous screening criteria. I do not feel the need
to do so at this time. I have passed up database, math, and CS
abstracts because they are outside the general AI and data
analysis domain of AIList; others might disagree. I have
included some borderline seminars because they were the first
of a series; I felt that the series itself was worth publicizing.
I can't please all of the people all of the time, but your feedback
is welcome to help me keep on course. At present, I regard the
abstracts to be one of AIList's strengths.
-- Ken Laws
------------------------------
Date: 11 Oct 83 16:30:27 PDT (Tuesday)
From: Kluger.PA@PARC-MAXC.ARPA
Reply-to: Kluger.PA@PARC-MAXC.ARPA
Subject: HP Computer Colloquium 10/13/83
Piero P. Bonissone
Corporate Research and Development
General Electric Corporation
DELTA: An Expert System for Troubleshooting
Diesel Electric Locomotives
The a priori information available to the repair crew is a list of
"symptoms" reported by the engine crew. More information can be
gathered in the "running repair" shop, by taking measurements and
performing tests provided that the two hour time limit is not exceeded.
A rule based expert system, DELTA (Diesel Electric Locomotive
Troubleshooting Aid) has been developed at the General Electric
Corporate Research and Development Laboratories to guide in the repair
of partially disabled electric locomotives. The system enforces a
disciplined troubleshooting procedure which minimizes the cost and time
of the corrective maintenance allowing detection and repair of
malfunctions in the two hour window allotted to the service personnel in
charge of those tasks.
A prototype system has been implemented in FORTH, running on a Digital
Equipment VAX 11/780 under VMS, on a PDP 11/70 under RSX-11M, and on a
PDP 11/23 under RSX-11M. This system contains approximately 550 rules,
partially representing the knowledge of a Senior Field Service Engineer.
The system is provided with graphical/video capabilities which can help
the user in locating and identifying locomotive components, as well as
illustrating repair procedures.
Although the system only contains a limited number of rules (550), it
covers, in a shallow manner, a wide breadth of the problem space. The
number of rules will soon be raised to approximately 1200 to cover, with
increased depth, a larger portion of the problem space.
Thursday, October 13, 1983 4:00 PM
Hewlett Packard
Stanford Division Labs
5M Conference room
1501 Page Mill Rd
Palo Alto, CA 9430
** Be sure to arrive at the building's lobby ON TIME, so that you may
be escorted to the meeting room.
------------------------------
Date: 11 Oct 83 13:47:44 EDT
From: LOUNGO@RUTGERS.ARPA
Subject: colloquium
[Reprinted from the RUTGERS bboard. Long message.]
Computer Science Faculty Research Colloquia
Date: Thursday, October 13, 1983
Time: 2:00-4:15
Place: Room 705, Hill Center, Busch Campus
Schedule:
2:00-2:15 Prof. Saul Amarel, Chairman, Department of Computer Science
Introductory Remarks
2:15-2:30 Prof. Casimir Kulikowski
Title: Expert Systems and their Applications
Area(s): Artificial intelligence
2:30-2:45 Prof. Natesa Sridharan
Title: TAXMAN
Area(s): Artificial intelligence (knowledge representation),
legal reasoning
2:45-3:00 Prof. Natesa Sridharan
Title: Artificial Intelligence and Parallelism
Area(s): Artificial intelligence, parallelism
3:00-3:15 Prof. Saul Amarel
Title: Problem Reformulations and Expertise Acquisition;
Theory Formation
Area(s): Artificial intelligence
3:15-3:30 Prof. Michael Grigoriadis
Title: Large Scale Mathematical Programming;
Network Optimization; Design of Computer Networks
Area(s): Computer networks
3:30-3:45 Prof. Robert Vichnevetsky
Title: Numerical Solutions of Hyperbolic Equations
Area(s): Numerical analysis
3:45-4:00 Prof. Martin Dowd
Title: P~=NP
Area(s): Computational complexity
4:00-4:15 Prof. Ann Yasuhara
Title: Notions of Complexity for Trees, DAGs, and Subsets of {0,1}*
Area(s): Computational complexity
COFFEE AND DONUTS AT 1:30
-------
Mail-From: LAWS created at 12-Oct-83 09:11:56
Mail-From: LOUNGO created at 11-Oct-83 13:48:35
Date: 11 Oct 83 13:48:35 EDT
From: LOUNGO@RUTGERS.ARPA
Subject: colloquium
To: BBOARD@RUTGERS.ARPA
cc: pettY@RUTGERS.ARPA, lounGO@RUTGERS.ARPA
ReSent-date: Wed 12 Oct 83 09:11:56-PDT
ReSent-from: Ken Laws <Laws@SRI-AI.ARPA>
ReSent-to: ailist@SRI-AI.ARPA
Computer Science Faculty Research Colloquia
Date: Friday, October 14, 1983
Time: 2:00-4:15
Place: Room 705, Hill Center, Busch Campus
Schedule:
2:00-2:15 Prof. Tom Mitchell
Title: Machine Learning and Artificial Intelligence
Area(s): Artificial intelligence
2:15-2:30 Prof. Louis Steinberg
Title: An Artificial Intelligence Approach to Computer-Aided
Design for VLSI
Area(s): Artificial intelligence, computer-aided design, VLSI
2:30-2:45 Prof. Donald Smith
Title: Debugging VLSI Designs
Area(s): Artificial intelligence, computer-aided design, VLSI
2:45-3:00 Prof. Apostolos Gerasoulis
Title: Numerical Solutions to Integral Equations
Area(s): Numerical analysis
3:00-3:15 Prof. Alexander Borgida
Title: Applications of AI to Information Systems Development
Area(s): Artificial intelligence, databases,
software engineering
3:15-3:30 Prof. Naftaly Minsky
Title: Programming Environments for Evolving Systems
Area(s): Software engineering, databases, artificial
intelligence
3:30-3:45 Prof. William Steiger
Title: Random Algorithms
Area(s): Analysis of algorithms, numerical methods,
non-numerical methods
3:45-4:00
4:00-4:15
!
Computer Science Faculty Research Colloquia
Date: Thursday, October 20, 1983
Time: 2:00-4:15
Place: Room 705, Hill Center, Busch Campus
Schedule:
2:00-2:15 Prof. Thomaz Imielinski
Title: Relational Databases and AI; Logic Programming
Area(s): Databases, artificial intelligence
2:15-2:30 Prof. David Rozenshtein
Title: Nice Relational Databases
Area(s): Databases, data models
2:30-2:45 Prof. Chitoor Srinivasan
Title: Expert Systems that Reason About Action with Time
Area(s): Artificial intelligence, knowledge-based systems
2:45-3:00 Prof. Gerald Richter
Title: Numerical Solutions to Partial Differential Equations
Area(s): Numerical analysis
3:00-3:15 Prof. Irving Rabinowitz
Title: - To be announced -
Area(s): Programming languages
3:15-3:30 Prof. Saul Levy
Title: Distributed Computing
Area(s): Computing, computer architecture
3:30-3:45 Prof. Yehoshua Perl
Title: Sorting Networks, Probabilistic Parallel Algorithms,
String Matching
Area(s): Design and analysis of algorithms
3:45-4:00 Prof. Marvin Paull
Title: Algorithm Design
Area(s): Design and analysis of algorithms
4:00-4:15 Prof. Barbara Ryder
Title: Incremental Data Flow Analysis
Area(s): Design and analysis of algorithms,
compiler optimization
COFFEE AND DONUTS AT 1:30
------------------------------
End of AIList Digest
********************
∂13-Oct-83 0828 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #36
Received: from SU-SCORE by SU-AI with TCP/SMTP; 13 Oct 83 08:28:31 PDT
Date: Wednesday, October 12, 1983 12:10PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #36
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Thursday, 13 Oct 1983 Volume 1 : Issue 36
Today's Topics:
Implementations - On Abolishing Retract
& On Replacing Assert and Retract
----------------------------------------------------------------------
Date: Sunday, 9-Oct-83 23:07:10-BST
From: Richard HPS (on ERCC DEC-10) <OKeefe.R.A.@EDXA>
Subject: More On Abolishing 'retract'
There doesn't seem to be much we can do about assert and retract
for data-base work except tidy it up a bit. However, people often
use the data-base as working storage when they haven't the least
intention of retaining the information after the top-level command
has finished. There was a puzzle solution in this Digest not very
long ago that seemed to be of this sort; I say "seemed" because I
can't do puzzles and sour-grapes pretend that I'm not interested,
so I didn't read it at all carefully.
There are several problems with using the data base to simulate
assignment. An earlier note pointed out the baneful effect on pure
code that results from the interpreter being prepared to cope with
data base hacking. But also
- assert and retract can be very very slow. On the DEC-10
assert(p(1,2,3,4,5,6)) takes about 1 or 2 milliseconds. In
Prolog-X assert = compile, and takes quite a bit longer.
- retracted clauses can't be reclaimed until the clause that
called retract is failed over, so the data base tends to
clog up pretty fast.
- it makes life very difficult indeed for program analysis
tools (cross-referencers and module systems in particular).
- for future reference: data base hacking doesn't mix with
parallel processing; you need complicated synchronisation
declarations, and you might as well use ADA [*] to start with.
[*] ADA is a trademark of the DOD UFO or something like that.
So, when we have some large data structure such as a graph
or a symbol table, is there a way we can handle it in Prolog
without using the data base?
Too right there is ! Ever heard of trees ? Let's take the
case of a symbol table. This is basically a map from Keys
to Data (Data is the plural of Datum), and is very easily
represented as a balanced binary tree where each node has
5 fields:
the Key, the Datum, the Balance (-1,0,+1),
the Left subtree and the Right subtree.
Looking up an element takes O(lg N) time, inserting and
deleting an element both take O(lg N) time and new space.
Building a symbol table with N elements turns over O(N lg N)
space, but keeps only O(N).
<Provided you have a garbage collector for the global/copy
stack> and <Provided you have TRO> passing around an explicit
tree and searching it in Prolog actually takes LESS space than
using the data base, and for sufficiently large N can be
expected to take less time as well.
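[A lookup sketch for the node layout just described, added by the
editor: nodes are assumed to be t(Key,Datum,Balance,Left,Right), the
empty tree is the atom 'empty', and Key is assumed to be given.
Insertion and deletion are similar but rebuild (and rebalance) the
path from the root, allocating the O(lg N) new nodes mentioned above.]

    lookup(Key, t(Key,Datum,_,_,_), Datum) :- !.
    lookup(Key, t(K,_,_,Left,_), Datum) :-
        Key @< K, !,
        lookup(Key, Left, Datum).
    lookup(Key, t(_,_,_,_,Right), Datum) :-
        lookup(Key, Right, Datum).

    % lookup/3 simply fails on 'empty', i.e. when Key is not in the tree.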
We can represent a graph as a set of edges held in some sort of
tree as easily as we can represent it as a relation in the data
base. In fact more easily, as it takes no effort at all to
carry around any number of structures, or to copy one, when
they are just terms. If we try to keep several graphs in the
data base by using triples
arc(GraphName, FromNode, ToNode)
we give away any benefit that simple indexing may have been
giving us for a single graph.
Another advantage of using terms is that changes to a data
structure {read, modified copies of a data structure} are
undone {read, just forgotten} on backtracking. Something
that often confuses beginners is that assert(...) isn't
undone by failure. And of course data base changes pose
extra problems for parallelism.
The one big disadvantage of passing around explicit data
structures is that if p calls q and q wants to use a structure,
not only does it have to be an argument of q, but p has to know
about it as well. This can lead to long argument lists, where
the order of the arguments doesn't mean anything, and it can be
a confounded nuisance when you realise that a new predicate has
to know about a particular structure, and suddenly a dozen or
so predicates that call it directly or indirectly have to have
this structure added to their argument lists. A partial answer
to this is to package several such things into one structure,
and let each predicate choose which it looks at. Passing
around an explicit state vector with a lot of components is
only marginally better than data base hacking (although it is
much more efficient). Applicative languages get around this
as far as <access> is concerned by having nested environments,
though they have precisely the same problem with <update>.
Prolog, of course, doesn't nest.
By the way, in a Prolog which has subgoal_of, you do have
global variables. Suppose you have a predicate p and you
want to make variables X, Y, Z available. Then you can write

    p(....) :-
        ...
        p_export(X,Y,Z,...).
    p_export(X,Y,Z,...) :-
        ...rest of p...
        true.              % block TRO

and to access say Y, you write

    ... subgoal_of(p_export(_,MyY,_,...)) ...
This doesn't take us quite as far from logic as data base hacking
does, because we could always add the ancestor list as an explicit
argument of every predicate. I have included it here not as a
coding trick to add to your tool-bag, but to stop someone
suggesting it as a possible solution to the "global variable"
problem. There HAS to be a better way of avoiding long argument
lists than THAT.
A VITAL thing in a Prolog implementation is a garbage collector.
I'm not talking about a garbage collector for the heap. You can
find any number of Prolog implementors who <claim> to have a
garbage collector when all they mean is that they eventually get
around to recovering space explicitly freed by the programmer with
retract. I'm talking about a garbage collector for the global/copy
stack. David Warren's report on DEC-10 Prolog does describe the
DEC-10 garbage collector. {I have never understood it. One day
I hope to.} Maurice Bruynooghe has published a number of papers
on Prolog garbage collection. {Oh yes, it isn't just the stack that
needs garbage collecting. Trail entries can become garbage too.}
There are quite a few garbage collection algorithms that could be
adapted to a copying system quite readily.
TRO is quite a handy thing to have, but without a garbage
collector all it means is that programs which used to run out of
local stack run a bit longer and then run out of global/copy stack.
TRO also helps garbage collection, because it reduces the number
of local frames, which may let GC reclaim some space sooner. But
we can live without TRO more easily than we can live without GC.
A lot of ugly Prolog practice can be attributed to programming
around interpreters that lack these features. For example,
something I have seen recommended is
    p(...) :-
        p_body(...),
        assert('$result'(...)),
        fail.
    p(...) :-
        retract('$result'(...)).
which saves the result, throws away the local stack (which could
have been done by a cut) and global stack and trail (which is
better done by a garbage collector), and then picks up the result
and keeps going. Now this is appallingly ugly, but if you haven't
got GC you may have to do it.
Could those Prolog implementers who have access to the net and
whose Prolog compilers or interpreters have GC please send a
message to this Digest saying so ?
Dec-10 Prolog has, of course.
PDP-11 Prolog has not.
C-Prolog has not.
I believe that Micro-Prolog has.
Poplog has.
Prologs embedded in Lisp generally have.
MU-Prolog has not.
IF-Prolog has not.
Prolog-X has not, but it definitely will have.
Before the Prolog-in-Lisp people start patting themselves on the
back, I'd like to point out that because Lisp doesn't know about
Prolog, the Lisp garbage collector may have on to things that a
Prolog-specific garbage collector could discard. This is no
reflection on Lisp. You can after all embed other things than
Prolog in it. {have on to => hang on to, sorry.} Poplog has,
or had, the same problem. I believe Sussex are doing or have
done some work on the Pop-11 garbage collector to tell it about
Prolog. This may be possible in some Lisp systems as well.
------------------------------
Date: Tuesday, 11-Oct-83 20:03:43-BST
From: Richard HPS (on ERCC DEC-10) <OKeefe.R.A.@EDXA>
Subject: More on Replacing Assert and Retract
If you've been following this Digest, you'll know that I've been
knocking assert and retract. I've also been looking for ways of
doing without them, because criticising is easy and construction
is hard.
assert and retract are sometimes used to implement "contexts" or
"alternative worlds". The is, the program is currently working
in a theory T (in Prolog, the program IS the theory T), and we
want to prove the goal G in a modified theory T', where T' is
sufficiently close to T that we can obtain it by adding a few
axioms to and deleting a few axioms from T. In a fully-fledged
logic programming language, we would just prove the implication,
E.g. if T' = T U {A1,...,An} we would try to prove the goal
(A1 & ... & An) => G. LPL actually lets you do that. Prolog
doesn't, so in Prolog we have to change the program so that it
IS T', try to prove G, and change the program back to T after
G succeeds or fails. (G may succeed more than once !).
So I considered packaging this up into a single operation. The
operation is
assuming(ListOfDifferences, GoalToProve)
and its meaning is supposed to be
Program |- assuming(D, G) :-
apply←changes(D, Program, Modified),
Modified |- call(G).
In the case when all the changes are additions, its meaning is
supposed to be
Program |- assuming(Additions, G) :-
Program |- Additions => G.
This operation has a lot of nice properties. The main one, of
course, is that it is so close to logic. Another one is that the
data base changes are local to the call of G. This has three
effects. One is that as the data base changes are synchronised
with the call, other calls need not pay any extra price (as they
do for assert and retract). Another is that we can reclaim the
space of the changes when the goal succeeds determinately, as
well as when it fails. By putting suitable information on the
trail, we can arrange for this to happen even if the cut which
forces the determinism occurs outside the call to assuming. The
third is that it is vaguely plausible that this operation might
not interfere too much with parallelism. Might.
There is a problem with removing axioms. assert(p(X)) does make
p(X) true in T', but retract(p(X)) does not make (all X)not p(X)
true. It just removes the first axiom which could be used to
prove p(X), leaving any others. It will find and remove the
others on backtracking but if we backtrack we can't remember the
identity of the retracted clauses to put them back. A method
which does work is to assert a clause p(X) :- !, fail. However,
that has problems too. If we have
p(a).
p(b).
and want to conceal p(a), putting
p(a) :- !, fail.
at the front not only fails the call p(a), it also fails the
call p(X) which should succeed and bind X to b. Putting the
new clause at the end does nothing at all. The only reliable
method is to give each clause a name, and to deny axioms by
name. (DEC-10 Prolog and C-Prolog have such names. They are
called data-base references and are more trouble than they are
worth.) This looks like an argument for retract again, but
consider denying p(X, a) when the data base contains
p(b, Y).
retract(p(X, a)) will retract that clause, binding X=b, Y=a.
But that is not what we mean by denying p(X,a), we would like
p(b,c) to remain true.
The problems with denial would have made me discard denial entirely.
However, I only noticed them after I had found another problem with
the code below. I leave it as an exercise for the reader to
discover what is wrong with this definition. I would like to point
out, though, that assuming(ListOfAdditionsOnly, Goal) *can* be
implemented efficiently and correctly by working at a lower level
than assert and retract. For example, I know what to do in C Prolog.
As this code is my third attempt at defining the operation in
Prolog, I am now convinced that it cannot be done. Don't bother
trying to prove me wrong until you have found the bug in this
version. What is more important is this:
this operation could be provided in a Prolog system;
it stays closer to logic than assert and retract do;
BUT would people find it useful?
I could certainly find uses for it. However, my programs don't do
much hypothetical reasoning, so it would not replace many of my
asserts and retracts. The problems with this method of hypothetical
reasoning are well known: results derived in one branch but which
happen to be independent of the new hypotheses are not available in
other branches. But then Prolog doesn't store lemmas anyway: every
time you call p(a,b) it is computed afresh. {Given that assert and
retract may have changed the program since the last time, it has to
compute everything afresh, and we wouldn't want read(_) lemmatised.}
Anyway, here's the code, for what it is worth. (About 3p.)
:- type
        delta --> +void | -void.
:- pred
        assuming(list(delta), void),
        make_assumed_changes(list(delta)),
        undo_assumed_changes(list(delta)).
:- public
        assuming/2.
:- mode
        assuming(+, +),
        make_assumed_changes(+),
        undo_assumed_changes(+).

assuming(Changes, Goal) :-
        make_assumed_changes(Changes),          % CALL Goal
        (   call(Goal),
            (   undo_assumed_changes(Changes)   % EXIT Goal
            |   make_assumed_changes(Changes),
                fail                            % REDO Goal
            )
        |   undo_assumed_changes(Changes),      % FAIL Goal
            fail
        ).

make_assumed_changes([+Clause|Changes]) :- !,
        asserta(Clause),
        make_assumed_changes(Changes).
make_assumed_changes([-Denial|Changes]) :- !,
        asserta(( Denial :- !, fail )),
        make_assumed_changes(Changes).
make_assumed_changes([]).

undo_assumed_changes([+Clause|Changes]) :-
        undo_assumed_changes(Changes),
        retract(Clause), !.
undo_assumed_changes([-Denial|Changes]) :-
        undo_assumed_changes(Changes),
        retract(( Denial :- !, fail )), !.
undo_assumed_changes([]).
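[Editor's note: a hypothetical call, only to show the intended
argument shapes -- prove g(1) in the theory obtained by adding the
clause p(a) and denying q(b):

    ?- assuming([+p(a), -q(b)], g(1)).

As the author says, the definition above still contains a deliberate
bug, so treat this as a statement of intent rather than a test case.]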
------------------------------
End of PROLOG Digest
********************
∂13-Oct-83 0902 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #36
Received: from SU-SCORE by SU-AI with TCP/SMTP; 13 Oct 83 09:02:16 PDT
Date: Wednesday, October 12, 1983 12:10PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #36
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Thursday, 13 Oct 1983 Volume 1 : Issue 36
Today's Topics:
Implementations - On Abolishing Retract
& On Replacing Assert and Retract
----------------------------------------------------------------------
Date: Sunday, 9-Oct-83 23:07:10-BST
From: Richard HPS (on ERCC DEC-10) <OKeefe.R.A.@EDXA>
Subject: More On Abolishing 'retract'
There doesn't seem to be much we can do about assert and retract
for data-base work except tidy it up a bit. However, people often
use the data-base as working storage when they haven't the least
intention of retaining the information after the top-level command
has finished. There was a puzzle solution in this Digest not very
long ago that seemed to be of this sort; I say "seemed" because I
can't do puzzles and sour-grapes pretend that I'm not interested,
so I didn't read it at all carefully.
There are several problems with using the data base to simulate
assignment. An earlier note pointed out the baneful effect on pure
code the results from the interpreter being prepared to cope with
data base hacking. But also
- assert and retract can be very very slow. On the DEC-10
assert(p(1,2,3,4,5,6)) takes about 1 or 2 milliseconds. In
Prolog-X assert = compile, and takes quite a bit longer.
- retracted clauses can't be reclaimed until the clause that
called retracted is failed over, so the data base tends to
clog up pretty fast.
- it makes life very difficult indeed for program analysis
tools (cross-referencers and module systems in particular).
- for future reference: data base hacking doesn't mix with
parallel processing; you need complicated synchronisation
declarations, and you might as well use ADA [*] to start with.
[*] ADA is a trademark of the DOD UFO or something like that.
So, when we have some large data structure such as a graph
or a symbol table, is there a way we can handle it in Prolog
without using the data base?
Too right there is ! Ever heard of trees ? Let's take the
case of a symbol table. This is basically a map from Keys
to Data (Data is the plural of Datum), and is very easily
represented as a balanced binary tree where each node has
5 fields:
the Key, the Datum, the Balance (-1,0,+1),
the Left subtree and the Right subtree.
Looking up an element takes O(lg N) time, inserting and
deleting an element both take O(lg N) time and new space.
Building a symbol table with N elements turns over O(N lg N)
space, but keeps only O(N).
<Provided you have a garbage collector for the global/copy
stack> and <Provided you have TRO> passing around an explicit
tree and searching it in Prolog actually takes LESS space than
using the data base, and for sufficiently large N can be
expected to take less time as well.
We can represent a graph as a set of edges held in some sort of
tree as easily as we can represent it as a relation in the data
base. In fact more easily, as it takes no effort at all to
carry around any number of structures, or to copy one, when
they are just terms. If we try to keep several graphs in the
data base by using triples
arc(GraphName, FromNode, ToNode)
we give away any benefit that simple indexing may have been
giving us for a single graph.
Another advantage of using terms is that changes to a data
structure {read, modified copies of a data structure} are
undone {read, just forgotten} on backtracking. Something
that often confuses beginners is that assert(...) isn't
undone by failure. And of course they pose extra problems
for parallelism.
The one big disadvantage of passing around explicit data
structures is that if p calls q and q wants to use a structure,
not only does it have to be an argument of q, but p has to know
about it as well. This can lead to long argument lists, where
the order of the arguments doesn't mean anything, and it can be
a confounded nuisance when you realise that a new predicate has
to know about a particular structure, and suddenly a dozen or
so predicates that call it directly or indirectly have to have
this structure added to their argument lists. A partial answer
to this is to package several such things into one structure,
and let each predicate choose which it looks at. Passing
around an explicit state vector with a lot of components is
only marginally better than data base hacking (although it is
much more efficient). Applicative languages get around this
as far as <access> is concerned by having nested environments,
though they have precisely the same problem with <update>.
Prolog, of course, doesn't nest.
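As a small sketch of that packaging idea (the state/3 term and the selector
names are invented for the example), each predicate takes a single State
argument and projects out only the components it actually needs:
    %  One term carries the symbol table, the graph and a counter.
    state_symtab(state(SymTab,_,_), SymTab).
    state_graph( state(_,Graph,_),  Graph).
    state_count( state(_,_,Count),  Count).

    %  A predicate deep in the call chain sees only State; here it consults
    %  the symbol table (tree_lookup/3 as sketched earlier) without knowing
    %  what else the state contains.
    lookup_in_state(Key, State, Datum) :-
            state_symtab(State, SymTab),
            tree_lookup(Key, SymTab, Datum).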
By the way, in a Prolog which has subgoal←of, you do have
global variables. Suppose you have a predicate p and you
want to make variables X, Y, Z available. Then you can write
p(....) :-
...
p←export(X,Y,Z,...).
p←export(X,Y,Z,...) :-
...rest of p...
true. % block TRO
and to access say Y, you write
... subgoal←of(p←export(←,MyY,←,...)) ...
This doesn't take us quite as far from logic as data base hacking
does, because we could always add the ancestor list as an explicit
argument of every predicate. I have included it here not as a
coding trick to add to your tool-bag, but to stop someone
suggesting it as a possible solution to the "global variable"
problem. There HAS to be a better way of avoiding long argument
lists than THAT.
A VITAL thing in a Prolog implementation is a garbage collector.
I'm not talking about a garbage collector for the heap. You can
find any number of Prolog implementors who <claim> to have a
garbage collector when all they mean is that they eventually get
around to recovering space explicitly freed by the programmer with
retract. I'm talking about a garbage collector for the global/copy
stack. David Warren's report on DEC-10 Prolog does describe the
DEC-10 garbage collector. {I have never understood it. One day
I hope to.} Maurice Bruynooghe has published a number of papers
on Prolog garbage collection. {Oh yes, it isn't just the stack that
needs garbage collecting. Trail entries can become garbage too.}
There are quite a few garbage collection algorithms that could be
adapted to a copying system quite readily.
TRO is quite a handy thing to have, but without a garbage
collector all it means is that programs which used to run out of
local stack run a bit longer and then run out of global/copy stack.
TRO also helps garbage collection, because it reduces the number
of local frames, which may let GC reclaim some space sooner. But
we can live without TRO more easily than we can live without GC.
A lot of ugly Prolog practice can be attributed to programming
around interpreters that lack these features. For example,
something I have seen recommended is
p(...) :-
p←body(...),
assert('$result'(...)),
fail.
p(...) :-
retract('$result'(...)).
which saves the result, throws away the local stack (which could
have been done by a cut) and global stack and trail (which is
better done by a garbage collector), and then picks up the result
and keeps going. Now this is appallingly ugly, but if you haven't
got GC you may have to do it.
Could those Prolog implementers who have access to the net and
whose Prolog compilers or interpreters have GC please send a
message to this Digest saying so ?
Dec-10 Prolog has, of course.
PDP-11 Prolog has not.
C-Prolog has not.
I believe that Micro-Prolog has.
Poplog has.
Prologs embedded in Lisp generally have.
MU-Prolog has not.
IF-Prolog has not.
Prolog-X has not, but it definitely will have.
Before the Prolog-in-Lisp people start patting themselves on the
back, I'd like to point out that because Lisp doesn't know about
Prolog, the Lisp garbage collector may hang on to things that a
Prolog-specific garbage collector could discard. This is no
reflection on Lisp. You can after all embed other things than
Prolog in it. Poplog has,
or had, the same problem. I believe Sussex are doing or have
done some work on the Pop-11 garbage collector to tell it about
Prolog. This may be possible in some Lisp systems as well.
------------------------------
Date: Tuesday, 11-Oct-83 20:03:43-BST
From: Richard HPS (on ERCC DEC-10) <OKeefe.R.A.@EDXA>
Subject: More on Replacing Assert and Retract
If you've been following this Digest, you'll know that I've been
knocking assert and retract. I've also been looking for ways of
doing without them, because criticising is easy and construction
is hard.
assert and retract are sometimes used to implement "contexts" or
"alternative worlds". The is, the program is currently working
in a theory T (in Prolog, the program IS the theory T), and we
want to prove the goal G in a modified theory T', where T' is
sufficiently close to T that we can obtain it by adding a few
axioms to and deleting a few axioms from T. In a fully-fledged
logic programming language, we would just prove the implication,
E.g. if T' = T U {A1,...,An} we would try to prove the goal
(A1 & ... & An) => G. LPL actually lets you do that. Prolog
doesn't, so in Prolog we have to change the program so that it
IS T', try to prove G, and change the program back to T after
G succeeds or fails. (G may succeed more than once !).
So I considered packaging this up into a single operation. The
operation is
assuming(ListOfDifferences, GoalToProve)
and its meaning is supposed to be
Program |- assuming(D, G) :-
apply←changes(D, Program, Modified),
Modified |- call(G).
In the case when all the changes are additions, its meaning is
supposed to be
Program |- assuming(Additions, G) :-
Program |- Additions => G.
This operation has a lot of nice properties. The main one, of
course, is that it is so close to logic. Another one is that the
data base changes are local to the call of G. This has three
effects. One is that as the data base changes are synchronised
with the call, other calls need not pay any extra price (as they
do for assert and retract). Another is that we can reclaim the
space of the changes when the goal succeeds determinately, as
well as when it fails. By putting suitable information on the
trail, we can arrange for this to happen even if the cut which
forces the determinism occurs outside the call to assuming. The
third is that it is vaguely plausible that this operation might
not interfere too much with parallelism. Might.
There is a problem with removing axioms. assert(p(X)) does make
p(X) true in T', but retract(p(X)) does not make (all X)not p(X)
true. It just removes the first axiom which could be used to
prove p(X), leaving any others. It will find and remove the
others on backtracking but if we backtrack we can't remember the
identity of the retracted clauses to put them back. A method
which does work is to assert a clause p(X) :- !, fail. However,
that has problems too. If we have
p(a).
p(b).
and want to conceal p(a), putting
p(a) :- !, fail.
at the front not only fails the call p(a), it also fails the
call p(X) which should succeed and bind X to b. Putting the
new clause at the end does nothing at all. The only reliable
method is to give each clause a name, and to deny axioms by
name. (DEC-10 Prolog and C-Prolog have such names. They are
called data-base references and are more trouble than they are
worth.) This looks like an argument for retract again, but
consider denying p(X, a) when the data base contains
p(b, Y).
retract(p(X, a)) will retract that clause, binding X=b, Y=a.
But that is not what we mean by denying p(X,a), we would like
p(b,c) to remain true.
The problems with denial would have made me discard denial entirely.
However, I only noticed them after I had found another problem with
the code below. I leave it as an exercise for the reader to
discover what is wrong with this definition. I would like to point
out, though, that assuming(ListOfAdditionsOnly, Goal) *can* be
implemented efficiently and correctly by working at a lower level
than assert and retract. For example, I know what to do in C Prolog.
As this code is my third attempt at defining the operation in
Prolog, I am now convinced that it cannot be done. Don't bother
trying to prove me wrong until you have found the bug in this
version. What is more important is this:
this operation could be provided in a Prolog system;
it stays closer to logic than assert and retract do;
BUT would people find it useful?
I could certainly find uses for it. However, my programs don't do
much hypothetical reasoning, so it would not replace many of my
asserts and retracts. The problems with this method of hypothetical
reasoning are well known: results derived in one branch but which
happen to be independent of the new hypotheses are not available in
other branches. But then Prolog doesn't store lemmas anyway: every
time you call p(a,b) it is computed afresh. {Given that assert and
retract may have changed the program since the last time, it has to
compute everything afresh, and we wouldn't want read(←) lemmatised.}
Anyway, here's the code, for what it is worth. (About 3p.)
:- type
delta --> +void | -void.
:- pred
assuming(list(delta), void),
make←assumed←changes(list(delta)),
undo←assumed←changes(list(delta)).
:- public
assuming/2.
:- mode
assuming(+, +),
make←assumed←changes(+),
undo←assumed←changes(+).
assuming(Changes, Goal) :-
make←assumed←changes(Changes), % CALL Goal
( call(Goal),
( undo←assumed←changes(Changes) % EXIT Goal
| make←assumed←changes(Changes),
fail % REDO Goal
)
| undo←assumed←changes(Changes), % FAIL Goal
fail
).
make←assumed←changes([+Clause|Changes]) :- !,
asserta(Clause),
make←assumed←changes(Changes).
make←assumed←changes([-Denial|Changes]) :- !,
asserta(( Denial :- !, fail )),
make←assumed←changes(Changes).
make←assumed←changes([]).
undo←assumed←changes([+Clause|Changes]) :-
undo←assumed←changes(Changes),
retract(Clause), !.
undo←assumed←changes([-Denial|Changes]) :-
undo←assumed←changes(Changes),
retract(( Denial :- !, fail )), !.
undo←assumed←changes([]).
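To make the intended use concrete (p/1, q/1 and the goal g/1 are invented
for the example): a call such as
    ?- assuming([+p(a), -q(b)], g(X)).
is meant to prove g(X) with the fact p(a) added at the front of the program
and calls to q(b) failed by the asserted denial, the changes being made
before the Goal is called and removed again when it exits or finally fails,
as the comments in the clauses above indicate.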
------------------------------
End of PROLOG Digest
********************
∂13-Oct-83 1439 GOLUB@SU-SCORE.ARPA teaching obligations
Received: from SU-SCORE by SU-AI with TCP/SMTP; 13 Oct 83 14:38:59 PDT
Date: Thu 13 Oct 83 14:38:52-PDT
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: teaching obligations
To: faculty@SU-SCORE.ARPA
cc: patashniK@SU-SCORE.ARPA, YM@SU-AI.ARPA, berglund@SU-SCORE.ARPA
A number of our faculty have been upset by the departmental teaching
requirement which is INDEPENDENT OF THE STUDENT ENROLLMENT. For instance,
if a course has ten students one gets the same credit as if one teaches
a course for 100 students. On the other hand it isn't ten times harder to
teach a 100 student class than a ten person class. (What should the factor be?)
The School of Engineering does weigh the teaching requirement by the
number of students. Should we adopt a similar policy? Does anyone
volunteer to serve on a committee to formulate a policy or chair it?
GENE
-------
∂13-Oct-83 1804 LAWS@SRI-AI.ARPA AIList Digest V1 #76
Received: from SRI-AI by SU-AI with TCP/SMTP; 13 Oct 83 18:04:03 PDT
Date: Thursday, October 13, 1983 10:13AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #76
To: AIList@SRI-AI
AIList Digest Thursday, 13 Oct 1983 Volume 1 : Issue 76
Today's Topics:
Intelligent Front Ends - Request,
Finance - IntelliGenetics,
Fuzzy Logic - Zadeh's Paradox,
Publication - Government Reviews
----------------------------------------------------------------------
Date: Thursday, 13-Oct-83 12:04:24-BST
From: BUNDY HPS (on ERCC DEC-10) <bundy@edxa>
Reply-to: bundy@rutgers.arpa
Subject: Request for Information on Intelligent Front Ends
The UK government has set up the Alvey Programme as the UK
answer to the Japanese 5th Generation Programme. One part of that
Programme has been to identify and promote research in a number of
'themes'. I am the manager of one such theme - on 'Intelligent Front
Ends' (IFE). An IFE is defined as follows:
"A front end to an existing software package, for example a finite
element package, a mathematical modelling system, which provides a
user-friendly interface (a "human window") to packages which without
it, are too complex and/or technically incomprehensible to be
accessible to many potential users. An intelligent front end builds a
model of the user's problem through user-oriented dialogue mechanisms
based on menus or quasi-natural language, which is then used to
generate suitably coded instructions for the package."
One of the theme activities is to gather information about
IFEs, for instance: useful references and short descriptions of
available tools. If you can supply such information then please send it
to BUNDY@RUTGERS. Thanks in advance.
Alan Bundy
------------------------------
Date: 12 Oct 83 0313 PDT
From: Arthur Keller <ARK@SU-AI>
Subject: IntelliGenetics
[Reprinted from the SU-SCORE bboard.]
From Tuesday's SF Chronicle (page 56):
"IntelliGenetics Inc., Palo Alto, has filed with the Securities and
Exchange Commission to sell 1.6 million common shares in late November.
The issue, co-managed by Ladenburg, Thalmann & Co. Inc. of New York
and Freehling & Co. of Chicago, will be priced between $6 and $7 a share.
IntelliGenetics provides artificial intelligence based software for use
in genetic engineering and other fields."
------------------------------
Date: Thursday, 13-Oct-83 16:00:01-BST
From: RICHARD HPS (on ERCC DEC-10) <okeefe.r.a.@edxa>
Reply-to: okeefe.r.a. <okeefe.r.a.%edxa@ucl-cs>
Subject: Zadeh's apartment paradox
The resolution of the paradox lies in realising that
"cheap apartments are expensive"
is not contradictory. "cheap" refers to the cost of
maintaining (rent, bus fares, repairs) the apartment
and "expensive" refers to the cost of procuring it.
The fully stated theorem is
\/x apartment(x) & low(upkeep(x)) =>
difficult←to←procure(x)
\/x difficult←to←procure(x) =>
high(cost←of←procuring(x))
hence \/x apartment(x) & low(upkeep(x)) =>
high(cost←of←procuring(x))
where "low" and "high" can be as fuzzy as you please.
A reasoning system should not conclude that cheap
flats don't exist, but rather that the axioms it has
been given are inconsistent with the assumption that
they do. Sooner or later you are going to tell it
"Jones has a cheap flat", and then it will spot the
flawed axioms.
[I can see your point that one might pay a high price
to procure an apartment with a low rental. There is
an alternate interpretation which I had in mind, however.
The paradox could have been stated in terms of any
bargain, specifically one in which upkeep is not a
factor. One could conclude, for instance, that a cheap
meal is expensive. My own resolution is that the term
"rare" (or "rare and highly sought") must be split into
subconcepts corresponding to the cause of rarity. When
discussing economics, one must always reason separately
about economic rarities such as rare bargains. The second
assertion in the syllogism then becomes "rare and highly
sought objects other than rare bargains are (Zadeh might
add 'usually') expensive", or "rare and highly sought
objects are either expensive or are bargains".
-- Ken Laws ]
------------------------------
Date: Thu 13 Oct 83 03:38:21-CDT
From: Werner Uhrig <CMP.WERNER@UTEXAS-20.ARPA>
Subject: Re: Zadeh Syllogism
Expensive apartments are not highly sought.
Items not in demand are cheap.
-> expensive apartments are cheap.
or The higher the price, the lower the demand.
The lower the demand, the lower the price.
-> the higher the price, the lower the price.
ergo ?? garbage in , garbage out!
Why am I thinking of Reaganomics right now ????
Werner (UUCP: { ut-sally , ut-ngp } !utastro!werner
ARPA: werner@utexas-20 )
PS: at this time of the day, one gets the urge to voice "weird" stuff ...
-------
[The first form is as persuasive as the original syllogism.
The second seems to be no more than a statement of negative
feedback. Whether the system is stable depends on the nature
of the implied driving forces. It seems we are now dealing
with a temporal logic.
An example of an unstable system is:
The fewer items sold, the higher the unit price must be.
The higher the price, the fewer the items sold.
--------------------------------------------------------
Bankruptcy.
-- KIL]
------------------------------
Date: Wed, 12 Oct 83 13:16 PDT
From: GMEREDITH.ES@PARC-MAXC.ARPA
Subject: Sensitivity Issue and Self-Awareness
I can understand the concern of researchers about censorship.
However, having worked with an agency which spent time extracting
information of a classified nature from unclassified or semi-secure
sources, I have to say that people not trained in such pursuits are
usually very poor judges of the difference between necessary efforts to
curb flow of classified information and "censorship".
I can also guarantee that this country's government is not alone in
knowing how to misuse the results of research carried out with the most
noble of intents.
Next, to the subject of self-awareness. The tendency of an individual
to see his/her corporeal self as distinct from the *I* experience or to
see others as robots or a kind of illusion is sufficient to win a tag of
'schizophrenic' from any psychiatrist and various other negative
reactions from those involved in other schools of the psychological
community.
Beyond that, the above tendencies make relating to 'real' world
phenomena very difficult. That semi coming around the curve will
continue to follow through on the illusion of having smashed those just
recently discontinued illusions in the on-coming car.
Guy
------------------------------
Date: Wed 12 Oct 83 00:07:15-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: Goverment Reviews of Basic Research
I must disagree with Frank Adrian who commented in a previous digest
that "I urge everyone to boycott this conference" and other conferences with
this requirement. The progress of science should not be halted due to some
government ruling, especially since an attempted boycott would have little
positive and (probably) much negative effect. Assuming that all of the
'upstanding' scientists participated, is there any reason to think that
the government couldn't find less discerning researchers more than happy to
accept grant money?
Eric (sorry, no last name) is preoccupied with the fact that government
'paid' for the research; aren't "we" the people the real owners, in that case?
Or can there be real owners of basic knowledge? As I recall, the patent office
has ruled that algorithms are unpatentable and thus inherently public domain.
The control of ideas has been an elusive goal for many governments, but even so,
it is rare for a government to try to claim ownership of an idea as a
justification for restriction; outside of the military domain, this seems
to be a new one...
As a scientist, I believe that the world and humanity will gain wisdom
and insight through research, which will eventually enable us to end war, hunger,
ignorance, whatever. Other forces in the world have different, more short-term
goals for our work; this is fine, as long as the long-term reasons for
scientific research are not sacrificed. Sure, they 'paid' for the results of
our short-term goals, but we should never allow that to blind us to the real
reason for working in AI, and *NO-ONE* can own that.
So I'll take government money (if they offer me any after this diatribe!)
and work on various systems and schemes, but I'll fight any attempt to
nullify the long term goals I'm really working for. I feel these new
restrictions are detrimental to the long-term goals of scientific research,
but currently, I'm going with things here... we're the best in the world (sigh)
and I plan on fighting to keep it that way.
David Rogers
DRogers@SUMEX-AIM.ARPA
------------------------------
Date: Wed, 12 Oct 83 10:26:28 EDT
From: Morton A. Hirschberg <mort@brl-bmd>
Subject: Flaming Mad
I have refrained from reflaming since I sent the initial
conference announcement on "Intelligent Systems and Machines."
First, the conference is not being sponsored by the US
Government. Second, many papers may be submitted by those
affected by the security release and it seemed necessary to
include this as part of the announcement. Third, I attended the
conference at Oakland earlier this year and it was a super
conference. Fourth, you may bite your nose to spite your face if
you as an individual do not want to submit a paper or attend but
you are not doing much service to those sponsoring the conference
who are true scientists by urging boycotts. Finally, below is a
little of my own philosophy.
I have rarely seen science or the application of science
(engineering) benefit anyone anywhere without an associated cost
(often called an investment). The costs are usually borne by the
investors and if the end product is a success then costs are
passed on to consumers. I can find few examples where
discoveries in science or in the name of science have not
benefited the discoverer and/or his heirs, or the investors.
Many of our early discoveries were made by men of considerable
wealth who could dally with theory and experimentation (and the
arts) and science using their own resources. We may have gained
a heritage but they gained a profit.
What seems to constitute a common heritage is either something
that has been around for so long that it is either in the public
domain or is a romanticized fiction (e.g. Paul Muni playing
Pasteur). Simultaneous discovery has been responsible for many
theories being in the public domain as well as leading to
products which were hotly contested in lawsuits. (e.g. did Bell
really invent the telephone or Edison the movie camera?)
Watson in his book "The Double Helix" gives a clear picture of
what a typical scientist may really be and it is not Arrowsmith.
I did not see Watson refuse his Nobel because the radiologist did
not get a prize.
Government, and here for historical reasons we must also include
state and church, has always had a role in the sciences. That
role is one that governments can not always be proud of (Galileo,
Rachel Carson, Sakharov).
The manner in which the United States Government conducts
business gives great latitude to scientists and to investors.
When the US Government buys something it should be theirs just as
when you as an individual buy something. As such it is then the
purview of the US Government as to what to do with the product.
Note the US Government often buys with limited rights of
ownership and distribution.
It has been my observation having worked in private industry,
for a university, and now for the government that relations among
the three have not been optimal and in many cases not mutually
rewarding. This is a great concern of mine and many of my
colleagues. I would like a role in changing relations among the
three and do work toward that as a personal goal. This includes
not referring to academicians as eggheads or charlatans;
industrialists as grubby profiteers; and government employees as
empty-headed bureaucrats.
I recommend that young flamers try to maintain a little naivete
as they mature but not so much that they are ignorant of reality.
Every institution has its structure, and by and large one works
within the structure to earn a living, is free to move on, or
can work to change that structure. One possible change is for
the US Government to conduct business the way the Japanese do
(at least in certain cases). Maybe AI is the place to start.
I also notice that mail on the net comes across much harsher
than it is intended to be. This can be overcome by being as
polite as possible and being more verbose. In addition, one can
read their mail more than once before flaming.
Mort
------------------------------
End of AIList Digest
********************
∂14-Oct-83 0224 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #37
Received: from SU-SCORE by SU-AI with TCP/SMTP; 14 Oct 83 02:24:08 PDT
Date: Thursday, October 13, 1983 8:54PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #37
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Friday, 14 Oct 1983 Volume 1 : Issue 37
Today's Topics:
Implementations - User Convenience Vs. Elegance
Assert & Setof & Retract,
LP Library - Setof.pl Available
----------------------------------------------------------------------
Date: Wed 12 Oct 83 10:14:05-PDT
From: Pereira@SRI-AI
Subject: User Convenience Vs. Elegance
It is my view that the ultimate in user convenience will come
from the ultimate in elegance & logical purity. The reason is
simple: elegance & purity make predictability. Now, one sometimes
hears that "elegance & purity" are "too mathematical" for users
to operate conveniently with. I have several arguments against
this view:
1. It shows a contempt for users' abilities not supported by fact:
each time I have failed to explain "elegant & pure" concepts
to "users" was because I was muddled about them myself.
2. A "pure & elegant" system can more easily be modeled by a program
which will guide the debugging process. This is the philosophy
behind Shapiro's "algorithmic program debugging", which is a model
from which future Prolog debugging tools should greatly benefit.
3. "Convenience for the user" is often a mask for "ignorance and
laziness of the implementer". If an implementer says "I can't
justify this feature on elegance & purity grounds, but it is
convenient for the user" it suggests that he has not thought
through what the user is really trying to do, or can't be bothered
constructing a higher level model of the desired behavior. The
best current debugger for Prolog programs, Lawrence Byrd's one
on DEC-10/20 Prolog (it may exist also in other Prologs I don't
know of) comes from an elegant model of and/or tree search. Type
errors in evaluable predicate arguments should be caught by some
general typing mechanism as Richard suggests, not by "ad hoc"
catches not motivated by semantics which 50% of the time will get
in the way of the user who knows what he is doing.
4. Prolog systems seem unfortunately to be going the way of Lisp
systems, if it doesn't work, add a feature. If Prolog programming
is to survive into the 21st century as the tool of choice for
"reasoning programs", Prolog systems must be cleaner, simpler and
more powerful (more predictable), and that will come only from
better theoretical understanding of what is to be done. The
following are areas from which new systems would most benefit:
- building ideas of algorithmic program debugging
into Prolog systems;
- understanding type checking as compile-time deduction
of a program's consistency with type axioms;
- inventing a logically-motivated module mechanism.
-- Fernando Pereira
------------------------------
Date: Wed 12 Oct 83 09:43:40-PDT
From: Pereira@SRI-AI
Subject: Assert & Retract
If convenience for the programmer is what you are after, letting
assertz modify running procedures is in at least 50% of the cases
what you need. For example, to compute bottom-up the transitive
closure c(X,Y) of r(U,V), do
close :-
connected(X,Y),
connected(Y,Z),
add←c(X,Z),
fail.
close.
connected(X,Y) :- r(X,Y).
connected(X,Y) :- c(X,Y).
add←c(X,Y) :-
not←subsumed(c(X,Y)),
assertz(c(X,Y)).
not←subsumed(P) :- "the usual stuff".
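One way to fill in "the usual stuff", assuming a plain duplicate check is
enough (it is, when the c/2 facts are ground; genuine subsumption testing
would have to do more):
    not←subsumed(P) :- \+ clause(P, true).
With the facts r(a,b), r(b,c) and r(c,d), and with c/2 declared dynamic where
the system requires it, the query ?- close. is then intended to leave c(a,c),
c(b,d) and c(a,d) in the data base, the r/2 edges staying reachable through
connected/2. Note that this relies on assertz making the new c/2 clauses
visible to later calls of connected/2, which is exactly the behaviour at
issue here.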
Programs of this kind can implement all sorts of bottom-up and mixed
initiative deductive procedures, such as those derived from CF parsing
algorithms like Cocke-Kasami-Younger and Earley. What is interesting
about these applications is that the implementation code is not pure
logic, but it uses the Prolog machinery to implement another correct
inference procedure, which can be abstracted as the computation of the
transitive closure of some relation (CKY has often been presented this
way).
So, either assertz is allowed to change running procedures, or
two brands of assertz are created to provide the two alternative
behaviors. In any case, trying to abstract frequently used
operations as Richard has done is a good thing.
-- Fernando Pereira
------------------------------
Date: Wed 12 Oct 83 09:20:17-PDT
From: Pereira@SRI-AI
Subject: Setof, Bagof et al
Warren's setof/bagof, implemented on DEC-10/20 Prolog and C-Prolog,
has some very interesting properties, not shared by the competition:
1. Variables not bound explicitly by some special operator (setof,
↑,...) are universally quantified at the outermost level. This
is a simple and uniform convention. To see what it buys you:
2. setof((X,S),setof(Y,p(X,Y),S),SS) binds SS to the list
representation of
{(x,s): s = {y: p(x,y)}}
the set of pairs (x,s) such that s is the set of y's such that
p(x,y). This is the only reasonable interpretation of nested
set expressions, and cannot be done with findall. I use things
like this all the time (what are set expressions for, if not
to do what sets do...). Recently, I've been helping someone to
move one of my programs to a Prolog lacking setof/bagof, and
what a pain it has been ! (A small sketch of point 2 follows this list.)
3. calls to findall are trivial to simulate with bagof, but
implementing bagof in terms of findall is rather tricky (that's
what David's implementation is all about).
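For concreteness, here is a tiny instance of point 2 (the p/2 facts are
invented for the example):
    p(tom, apples).    p(tom, pears).    p(ann, plums).

    ?- setof((X,S), setof(Y,p(X,Y),S), SS).
    SS = [(ann,[plums]), (tom,[apples,pears])]
The inner setof collects, for each X, the set of Y's with p(X,Y); the outer
one collects the resulting (X,S) pairs, sorted in the standard order, into a
single answer.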
What Richard's complaint was about was the abuse of the name "bagof"
to mean something different from what it had been used to mean in the
literature, and for which there was already a perfectly good name,
"findall". Of course, the Humpty Dumpty school of semantics thinks
nothing of such abuses...
-- Fernando Pereira
------------------------------
Date: Wed 12 Oct 83 13:25:37-PDT
From: Pereira@SRI-AI
Subject: Sackcloth and ashes...
I am adding this as a footnote to my previous messages, before
supernatural wrath smothers me...
Of course, I have been one of the contributors to the Lispish
proliferation of features found in some Prolog systems. My comments
are motivated by the urgency of solving properly the problems those
features address before the features, whose alleged necessity was
more often than not justified on efficiency or space grounds, are
erected into a standard that will get in the way of future
implementations.
-- Fernando Pereira
------------------------------
Date: Tuesday, 11-Oct-83 22:05:55-BST
From: Richard HPS (on ERCC DEC-10) <OKeefe.R.A.@EDXA>
Subject: bagof & setof
% File : SETOF.PL
% Author : R.A.O'Keefe
% Updated: 11 October 1983
% Purpose: Define set←of/3 and bag←of/3
/* This file defines two predicates which act like setof/3 and
bagof/3. I have seen the code for these routines in DEC-10
and in C-Prolog, but I no longer recall it, and this code was
independently derived in 1982 by me and me alone.
Most of the complication comes from trying to cope with free
variables in the Filter; these definitions actually enumerate
all the solutions, then group together those with the same
bindings for the free variables. There must be a better way of
doing this. I do not claim any virtue for this code other than
the virtue of working. In fact there is a subtle bug: if
setof/bagof occurs as a data structure in the Generator it will
be mistaken for a call, and free variables treated wrongly.
Given the current nature of Prolog, there is no way of telling
a call from a data structure, and since nested calls are FAR more
likely than use as a data structure, we just put up with the
latter being wrong. The same applies to negation.
Would anyone incorporating this in their Prolog system please
credit both me and David Warren; he thought up the definitions,
and my implementation may owe more to subconscious memory of his
than I like to think. At least this ought to put a stop to
fraudulent claims to having bagof, by replacing them with
genuine claims.
*/
[ setof.pl is available through FTP as {SU-SCORE}PS:<PROLOG>Setof.Pl
If you do not have ARPAnet access, I have a LIMITED number of
hardcopy listings that could be mailed. -ed ]
------------------------------
End of PROLOG Digest
********************
∂14-Oct-83 1020 CLT SEMINAR IN LOGIC AND FOUNDATIONS
To: "@DIS.DIS[1,CLT]"@SU-AI
SPEAKER: Prof. J. E. Fenstad, University of Oslo
TITLE: Hyperfinite probability theory; basic ideas and applications
in natural sciences
TIME: Wednesday, Oct. 19, 4:15-5:30 PM
PLACE: Stanford Mathematics Dept. Faculty Lounge, 383N
The talk will assume some acquaintance with non-standard analysis
(existence of the extensions, transfer). But the ideas of hyperfinite
probability theory (e.g. Loeb construction) will be explained before turning
to applications, which will mainly be to hyperfinite spin systems
(statistical mechanics, polymer models, field theory). The models
will be fully explained, so no knowledge of "advanced" physics is presupposed.
S. Feferman
∂14-Oct-83 1113 ELYSE@SU-SCORE.ARPA Announcement of DoD-University Program for 1984/85
Received: from SU-SCORE by SU-AI with TCP/SMTP; 14 Oct 83 11:13:21 PDT
Date: Fri 14 Oct 83 11:14:01-PDT
From: Elyse Krupnick <ELYSE@SU-SCORE.ARPA>
Subject: Announcement of DoD-University Program for 1984/85
To: faculty@SU-SCORE.ARPA
Stanford-Phone: (415) 497-9746
In my office, on top of the file cabinets, you can read the announcement of the
DoD-University Research Instrumentation Program for 1984/85. It comes from the
Army Research Office, the Office of Naval Research, and the Air Force Office of
Scientific Research.
-------
∂14-Oct-83 1545 LAWS@SRI-AI.ARPA AIList Digest V1 #77
Received: from SRI-AI by SU-AI with TCP/SMTP; 14 Oct 83 15:44:18 PDT
Date: Friday, October 14, 1983 9:36AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #77
To: AIList@SRI-AI
AIList Digest Friday, 14 Oct 1983 Volume 1 : Issue 77
Today's Topics:
Natural Language - Semantic Chart Parsing & Macaroni & Grammars,
Games - Rog-O-Matic,
Seminar - Nau at UMaryland, Diagnostic Problem Solving
----------------------------------------------------------------------
Date: Wednesday, 12 October 1983 14:01:50 EDT
From: Robert.Frederking@CMU-CS-CAD
Subject: "Semantic chart parsing"
I should have made it clear in my previous note on the subject that
the phrase "semantic chart parsing" is a name I've coined to describe a
parser which uses the technique of syntactic chart parsing, but includes
semantic information right from the start. In a way, it's an attempt to
reconcile Schank-style immediate semantic interpretation with syntactically
oriented parsing, since both sources of information seem worthwhile.
------------------------------
Date: Wednesday, 12-Oct-83 17:52:33-BST
From: RICHARD HPS (on ERCC DEC-10) <okeefe.r.a.@edxa>
Reply-to: okeefe.r.a. <okeefe.r.a.%edxa@ucl-cs>
Subject: Natural Language
There was rather more inflammation than information in the
exchanges between Dr Pereira and Whats-His-Name-Who-Butchers-
Leprechauns. Possibly it's because I've only read one or two
[well, to be perfectly honest, three] papers on PHRAN and the
others in that PHamily, but I still can't see why it is that
their data structures aren't a grammar. Admittedly they don't
look much like rules in an XG, but then rules in an XG don't
look much like an ATN either, and no-one has qualms about
calling ATNs grammars. Can someone please explain in words
suitable for a 16-year-old child what makes phrasal analysis
so different from
XGs (Extraposition grammars, include DCGS in this)
ATNs
Marcus-style parsers
template-matching
so different that it is hailed as "solving" the parsing problem?
I have written grammars for tiny fragments of English in DCG,
ATN, and PIDGIN -styles [the adverbs get me every time]. I am not
a linguist, and the coverage of these grammars was ludicrously
small. So my claim that I found it vastly easier to extend and
debug the DCG version [DCGs are very like EAGs] will probably be
dismissed with the contempt it deserves. Dr Pereira has published
his parser, and in other papers has published an XG interpreter.
I believe a micro-PHRAN has been published, and I would be grateful
for a pointer to it. Has anyone published a phrasal-analysis
grimoire (if the term "grammar" doesn't suit) with say >100 "things"
(I forget the right name for the data structures), and how can I
get a copy?
People certainly can accept ill-formed sentences. But they DO
have quite definite notions of what is a well-formed sentence and
what is not. I was recently in a London Underground station, and
saw a Telecom poster. It was perfectly obvious that it was written
by an Englishman trying to write in American. It finally dawned on
me that he was using American vocabulary and English syntax. At
first sight the poster read easily enough, and the meaning came through.
But it was sufficiently strange to retain my attention until I saw what
was odd about it. Our judgements of grammaticality are as sensitive as
that. [I repeat, I am no linguist. I once came away from a talk by
Gazdar saying to one of my fellow students, who was writing a parser:
"This extraposition, I don't believe people do that."] I suggest that
people DO learn grammars, and what is more, they learn them in a form
that is not wholly unlike [note the caution] DCGs or ATNs. We know that
DCGs are learnable, given positive and negative instances. [Oh yes,
before someone jumps up and down and says that children don't get
negative instances, that is utter rubbish. When a child says something
and is corrected by an adult, is that not a negative instance? Of course
it is!] However, when people APPLY grammars for parsing, I suggest that
they use repair methods to match what they hear against what they
expect. [This is probably frames again.] These repair methods range
all the way from subconscious signal cleaning [coping with say a lisp]
to fully conscious attempts to handle "Colourless Green ideas sleep
furiously". [Maybe parentheses like this are handled by a repair
mechanism?] If this is granted, some of the complexity required to
handle say ellipsis would move out of the grammar and into the repair
mechanisms. But if there is anything we know about human psychology,
it is that people DO have repair mechanisms. There is a lot of work
on how children learn mathematics [not just Brown & co], and it turns
out that children will go to extraordinary lengths to patch a buggy
hack rather than admit they don't know. So the fact that people can
cope with ungrammatical sentences is not evidence against grammars.
As evidence FOR grammars, I would like to offer Macaroni. Not
the comestible, the verse form. Strictly speaking, Macaroni is a
mixture of the vernacular and Latin, but since it is no longer
popular we can allow any mixture of languages. The odd thing about
Macaroni is that people can judge it grammatical or ungrammatical,
and what is more, can agree about their judgements as well as they
can agree about the vernacular or Latin taken separately. My Latin
is so rusty there is no iron left, so here is something else.
[Prolog is] [ho protos logos] [en programmation logiciel]
English Greek French
This of course is (NP copula NP) PP, which is admissible in all
three languages, and the individual chunks are well-formed in their
several languages. The main thing about Macaroni is that when
two languages have a very similar syntactic class, such as NP,
a sentence which starts off in one language may rewrite that
category in the other language, and someone who speaks both languages
will judge it acceptable. Other ways of dividing up the sentence are
not judged acceptable, e.g.
Prolog estin ho protos mot en logic programmation
is just silly. S is very similar in most languages, which would account
for the acceptability of complete sentences in another language. N is
pretty similar too, and we feel no real difficulty with single isolated
words from other languages like "chutzpa" or "pyjama" or "mana". When
the syntactic classes are not such a good match, we feel rather more
uneasy about the mixture. For example, "[ka ora] [teenei tangata]"
and "[these men] [are well]" both say much the same thing, but because
the Maaori nominal phrase and the English noun phrase aren't all that
similar, "[teenei tangata] [are well]" seems strained.
The fact that bilingual people have little or no difficulty with
Macaroni is just as much a fact as the fact the people in general have
little difficulty with mildly malformed sentences. Maybe they're the
same fact. But I think the former deserves as much attention as the
latter.
Does anyone have a parser with a grammar for English and a grammar
for [UK -> French or German; Canada -> French; USA -> Spanish] which use
the same categories as far as possible? Have a go at putting the two
together, and try it on some Macaroni. I suspect that if you have some
genuinely bilingual speakers to assist you, you will find it easier to
develop/correct the grammars together than separately. [This does not
hold for non-related languages. I would not expect English and Japanese
to mix well, but then I don't know any Japanese. Maybe it's worth trying.]
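As a toy illustration of the experiment proposed here, a DCG sketch (the
rules and lexicon are invented, and far too small to prove anything) in
which the S rule is shared while NP and PP may be rewritten in either
language:
    s --> np(_), copula, np(_), pp(_).

    np(english) --> [prolog].
    np(greek)   --> [ho, protos, logos].
    np(french)  --> [le, premier, mot].

    pp(french)  --> [en, programmation, logiciel].
    pp(english) --> [in, logic, programming].

    copula --> [is].
    copula --> [estin].

    % ?- phrase(s, [prolog, is, ho, protos, logos,
    %               en, programmation, logiciel]).
    % succeeds, while a split that cuts across the constituents does not parse.
Genuinely bilingual informants would still be needed to judge whether the
mixtures such a grammar accepts are the ones they find acceptable.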
------------------------------
Date: Thu 13 Oct 83 11:07:26-PDT
From: WYLAND@SRI-KL.ARPA
Subject: Dave Curry's request for a Simple English Grammar
I think the book "Natural Language Information
Processing" by Naomi Sager (Addison-Wesley, 1981) may be useful.
This book represents the results of the Linguistic String project
at New York University, and Dr. Sager is its director. The book
contains a BNF grammar set of 400 or so rules for parsing English
sentences. It has been applied to medical text, such as
radiology reports and narrative documents in patient records.
Dave Wyland
WYLAND@SRI
------------------------------
Date: 11 Oct 83 19:41:39-PDT (Tue)
From: harpo!utah-cs!shebs @ Ucb-Vax
Subject: Re: WANTED: Simple English Grammar - (nf)
Article-I.D.: utah-cs.1994
(Oh no, here he goes again! and with his water-cooled keyboard too!)
Yes, analysis of syntax alone cannot possibly work - as near as I can
tell, syntax-based parsers need an enormous amount of semantic processing,
which seems to be dismissed as "just pragmatics" or whatever. I'm
not an "in" member of the NLP community, so I haven't been able to
find out the facts, but I have a bad feeling that some of the well-known
NLP systems are gigantic hacks, whose syntactic analyzer is just a bag
hanging off the side, but about which all the papers are written. Mind
you, this is just a suspicion, and I welcome any disproof...
stan the l.h.
utah-cs!shebs
------------------------------
Date: 7 Oct 83 9:54:21-PDT (Fri)
From: decvax!linus!vaxine!wjh12!foxvax1!brunix!rayssd!asa @ Ucb-Vax
Subject: Re: WANTED: Simple English Grammar - (nf)
Article-I.D.: rayssd.187
date: 10/7/83
Yesterday I sent a suggestion that you look at Winograd's
new book on syntax. Upon reflection, I realized that there are
several aspects of syntax not clearly stated therein. In particular,
there is one aspect which you might wish to think about, if you
are interested in building models and using the 'expectations'
approach. This aspect has to do with the synergism of syntax and
semantics. The particular case which occurred to me is an example
of the specific ways that Latin grammar terminology is inappropriate
for English. In English, there is no 'present' tense in the intuitive
sense of that word. The stem of the verb (which Winograd calls the
'infinitive' form, in contrast to the traditional use of this term to
signify the 'to+stem' form) actually encodes the semantic concept
of 'indefinite habitual'. Thus, to say only 'I eat.' sounds
peculiar. When the stem is used alone, we expect a qualifier, as in
'I eat regularly', or 'I eat very little', or 'I eat every day'. In
this framework, there is a connection with the present, in the sense
that the process described is continuous, has existed in the past,
and is expected to continue in the future. Thus, what we call the
'present' is really a 'modal' form, and might better be described
as the 'present state of a continuing habitual process'. If we wish
to describe something related to our actual state at this time,
we use what I think of as the 'actual present', which is 'I am eating'.
Winograd hints at this, especially in Appendix B, in discussing verb
forms. However, he does not go into it in detail, so it might help
you understand better what's happening if you keep in mind the fact
that there exist specific underlying semantic functions being
implemented, which are in turn based on the type of information
to be conveyed and the subtlety of the distinctions desired. Knowing
this at the outset may help you decide the elements you wish to
model in a simplified program. It will certainly help if you
want to try the expectations technique. This is an ideal situation
in which to try a 'blackboard' type of expert system, where the
sensing, semantics, and parsing/generation engines operate in
parallel. Good luck!
A final note: if you would like to explore further a view
of grammar which totally dispenses with the terms and concepts of
Latin grammar, you might read "The Languages of Africa" (I think
that's the title), by William Welmer.
By the way! Does anyone out there know if Welmer ever published
his fascinating work on the memory of colors as a function of time?
Did it at least get stored in the archives at Berkeley?
Asa Simmons
rayssd!asa
------------------------------
Date: Thursday, 13 October 1983 22:24:18 EDT
From: Michael.Mauldin@CMU-CS-CAD
Subject: Total Winner
@ @ @ @ @ @@@ @ @
@ @ @@ @@ @ @ @ @
@ @ @@@ @ @ @ @@@ @@@@ @@@ @ @@@ @
@@@@@ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @@@@@ @ @ @@@@ @ @ @@@@@ @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @@@ @ @ @@@@ @@@@ @@@ @@@ @@ @
Well, thanks to the modern miracles of parallel processing (i.e. using
the UUCPNet as one giant distributed processor) Rog-O-Matic became an
honest member of the Fighter's guild on October 10, 1983. This is the
fourth total victory for our Heuristic Hero, but the first time he has
done so without using a "Magic Arrow". This comes only a year and two
weeks after his first total victory. He will be two years old on
October 19. Happy Birthday!
Damon Permezel of Waterloo was the lucky user. Here is his announcement:
- - - - - - - -
Date: Mon, 10 Oct 83 20:35:22 PDT
From: allegra!watmath!dapermezel@Berkeley
Subject: total winner
To: mauldin@cmu-cs-a
It won! The lucky SOB started out with armour class of 1 and a (-1,0)
two handed sword (found right next to it on level 1). Numerous 'enchant
armour' scrolls were found, as well as a +2 ring of dexterity, +1 add
strength, and slow digestion, not to mention +1 protection. Luck had an
important part to play, as initial confrontations with 'U's got him
confused and almost killed, but for the timely stumbling onto the stairs
(while still confused). A scroll of teleportation was seen to be used to
advantage once, while it was pinned between 2 'X's in a corridor.
- - - - - - - -
Date: Thu, 13 Oct 83 10:58:26 PDT
From: allegra!watmath!dapermezel@Berkeley
To: mlm@cmu-cs-cad.ARPA
Subject: log
Unfortunately, I was not logging it. I did make sure that there
were several witnesses to the game, who could verify that it (It?)
was a total winner.
- - - - - - - -
The paper is still available; for a copy of "Rog-O-Matic: A Belligerent
Expert System", please send your physical address to "Mauldin@CMU-CS-A"
and include the phrase "paper request" in the subject line.
Michael Mauldin (Fuzzy)
Department of Computer Science
Carnegie-Mellon University
Pittsburgh, PA 15213
(412) 578-3065, mauldin@cmu-cs-a.
------------------------------
Date: 13 Oct 83 21:35:12 EDT (Thu)
From: Dana S. Nau <dsn%umcp-cs@CSNet-Relay>
Subject: University of Maryland Colloquium
University of Maryland
Department of Computer Science
Colloquium
Monday, October 24 -- 4:00 PM
Room 2324 - Computer Science Building
A Formal Model of Diagnostic Problem Solving
Dana S. Nau
Computer Science Dept.
University of Maryland
College Park, Md.
Most expert computer systems are based on production rules, and to
some readers the terms "expert computer system" and "production rule
system" may seem almost synonymous. However, there are problem domains
for which the usual production rule techniques appear to be inadequate.
This talk presents a useful alternative to rule-based problem
solving: a formal model of diagnostic problem solving based on a
generalization of the set covering problem, and formalized algorithms
for diagnostic problem solving based on this model. The model and the
resulting algorithms have the following features:
(1) they capture several intuitively plausible features of human
diagnostic inference;
(2) they directly address the issue of multiple simultaneous causative
disorders;
(3) they can serve as a basis for expert systems for diagnostic problem
solving; and
(4) they provide a conceptual framework within which to view recent
work on diagnostic problem solving in general.
Coffee and refreshments - Rm. 3316 - 3:30
------------------------------
End of AIList Digest
********************
∂14-Oct-83 2049 LAWS@SRI-AI.ARPA AIList Digest V1 #78
Received: from SRI-AI by SU-AI with TCP/SMTP; 14 Oct 83 20:49:25 PDT
Date: Friday, October 14, 1983 2:25PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #78
To: AIList@SRI-AI
AIList Digest Saturday, 15 Oct 1983 Volume 1 : Issue 78
Today's Topics:
Philosophy - Dedekind & Introspection,
Rational Psychology - Connectionist Models,
Creativity - Intuition in Physics,
Conference - Forth,
Seminar - IUS Presentation
----------------------------------------------------------------------
Date: 10 Oct 83 11:54:07-PDT (Mon)
From: decvax!duke!unc!mcnc!ncsu!uvacs!mac @ Ucb-Vax
Subject: consciousness, loops, halting problem
Article-I.D.: uvacs.983
With regard to loops and consciousness, consider Theorem 66 of Dedekind's
book on the foundations of mathematics, "Essays on the Theory of Numbers",
translated 1901. This is the book where the Dedekind Cut is invented to
characterize irrational numbers.
64. Definition. A system S is said to be infinite when it
is similar to a proper part of itself; in the contrary case
S is said to be a finite system.
66. Theorem. There exist infinite systems. Proof. My own
realm of thoughts, i.e. the totality S of all things, which
can be objects of my thought, is infinite. For if s
signifies an element of S, then is the thought s', that s
can be object of my thought, itself an element of S. If we
regard this as transform phi(s) of the element s then has
the transformation phi of S, thus determined, the property
that the transform S' is part of S; and S' is certainly
proper part of S, because there are elements of S (e.g. my
own ego) which are different from such thought s' and
therefore are not contained in S'. Finally it is clear that
if a, b are different elements of S, their transformation
phi is a distinct (similar) transformation. Hence S is
infinite, which was to be proved.
For that matter, net.math seems to be in a loop. They were discussing the
Banach-Tarski paradox about a year ago.
Alex Colvin
ARPA: mac.uvacs@UDel-Relay CS: mac@virginia USE: ...uvacs!mac
------------------------------
Date: 8 Oct 83 13:53:38-PDT (Sat)
From: hplabs!hao!seismo!rochester!blenko @ Ucb-Vax
Subject: Re: life is but a dream
Article-I.D.: rocheste.3318
The statement that consciousness is an illusion does not mean it does
not or cannot have a concrete realization. I took the remarks to mean
simply that the entire mental machinery is not available for
introspection, and in its place some top-level "picture" of the process
is made available. The picture need not reflect the details of internal
processing, in the same way that most people's view of a car does not
bear much resemblance to its actual mechanistic internals.
For those who may not already be aware, the proposal is not a new one.
I find it rather attractive, admitting my own favorable
predisposition towards the proposition that mental processing is
computational.
I still think this newsgroup would be more worthwhile if readers
adopted a more tolerant attitude. It seems to be the case that there is
nearly always a silly interpretation of someone's contribution;
discovering that interpretation doesn't seem to be a very challenging
task.
Tom Blenko
blenko@rochester
decvax!seismo!rochester!blenko
allegra!rochester!blenko
------------------------------
Date: 11 Oct 83 9:37:52-PDT (Tue)
From: hplabs!hao!seismo!rochester!gary @ Ucb-Vax
Subject: Re: "Rational Psychology"
Article-I.D.: rocheste.3352
This is in response to John Black's comments, to wit:
> Having a theoretical (or "rational" -- terrible name with all the wrong
> connotations) psychology is certainly desirable, but it does have to make
> some contact with the field it is a theory of. One of the problems here is
> that the "calculus" of psychology has yet to be invented, so we don't have
> the tools we need for the "Newtonian mechanics" of psychology. The latest
> mathematical candidate was catastrophe theory, but it turned out to be a
> catastrophe when applied to human behavior. Perhaps Periera and Doyle have
> a "calculus" to offer.
This is an issue I (and I think many AI'ers) are particularly interested in,
that is, the correspondence between our programs and the actual workings of
the mind. I believe that an *explanatory* theory of behavior will not be at
the functional level of correspondence with human behavior. Theories which are
at the functional level are important for pinpointing *what* it is that people
do, but they don't get a handle on *how* they do it. And, I think there are
side-effects of the architecture of the brain on behavior that do not show up
in functional level models.
This is why I favor (my favorite model!) connectionist models as being a
possible "calculus of Psychology". Connectionist models, for those unfamiliar
with the term, are a version of neural network models developed here at
Rochester (with related models at UCSD and CMU) that attempts to bring the
basic model unit into line with our current understanding of the information
processing capabilities of neurons. The units themselves are relatively stupid
and slow, but have state, and can compute simple functions (not restricted to
linear). The simplicity of the functions is limited only by "gentleman's
agreement", as we still really have no idea of the upper limit of neuronal
capabilities, and we are guided by what we seem to need in order to accomplish
whatever task we set them to. The payoff is that they are highly connected to
one another, and can compute in parallel. They are not allowed to pass symbol
structures around, and have their output restricted to values in the range
1..10. Thus we feel that they are most likely to match the brain in power.
The problem is how to compute with the things! We regard the outcome of a
computation to be a "stable coalition", a set of units which mutually
reinforce one another. We use units themselves to represent values of
parameters of interest, so that mutually compatible values reinforce one
another, and mutually exclusive values inhibit one another. These could
be the senses of the words in a sentence, the color of a patch in the
visual field, or the direction of intended eye movement. The result is
something that looks a lot like constraint relaxation.
Anyway, I don't want to go on forever. If this sparks discussion or interest
references are available from the U. of R. CS Dept. Rochester, NY 14627.
(the biblio. is a TR called "the Rochester Connectionist Papers").
gary cottrell (allegra or seismo)!rochester!gary or gary@rochester
------------------------------
Date: 10 Oct 83 8:00:59-PDT (Mon)
From: harpo!eagle!mhuxi!mhuxj!mhuxl!mhuxm!pyuxi!pyuxn!rlr @ Ucb-Vax
Subject: Re: RE: Intuition in Physics
Article-I.D.: pyuxn.289
> I presume that at birth, one's mind is not predisposed to one or another
> of several possible theories of heavy molecule collision (for example).
> Further, I think it unlikely that personal or emotional interaction in
> one's "pre-analytic" stage (see anything about developmental psych.)
> is likely to bear upon one's opinions about those molecules. In fact I
> find it hard to believe that anything BUT technical learning is likely
> to bear on one's intuition about the molecules. One might want to argue
> that one's personality might force you to lean towards "aggressive" or
> overly complex theories, but I doubt that such effects will lead to
> the creation of a theory. Only a rather mild predisposition at best.
> In psychology it is entirely different. A person who is aggressive has
> lots of reasons to assume everyone else is as well. Or paranoid, or
> that rote learning is especially good or bad, or that large dogs are
> dangerous, or a number of other things that bear directly on one's
> theories of the mind. And these biases are acquired from the process
> of living and are quite unavoidable.
The author believes that, though behavior patterns and experiences in a
person's life may affect their viewpoint in psychological studies, this
does not apply in "technical sciences" (not the author's phrasing, and not
mine either---I just can't think of another term) like physics. It would
seem that flashes of "insight" obtained by anyone in a field involving
discovery have to be based on both the technical knowledge that the person
already has AND the entire life experience up to that point. To oversimplify,
if one has never seen a specific living entity (a flower, a specific animal)
or witnessed a physical event, or participated in a particular human
interaction, one cannot base a proposed scientific model on these things, and
these flashes are often based on such analogies to reality.
------------------------------
Date: 9 Oct 83 14:38:45-PDT (Sun)
From: decvax!genrad!security!linus!utzoo!utcsrgv!utcsstat!laura @
Ucb-Vax
Subject: Re: RE: Intuition in Physics
Article-I.D.: utcsstat.1251
Gary,
I don't know about why you think about physics, but I know something about
why *I* think about physics. You see, i have this deep fondness for
"continuous creation" as opposed to "the big bang". This is too bad for me,
since "big bang" appears to be correct, or at any rate, "continuous
creation" appears to be *wrong*. Perhaps what it more correct is
"bang! sproiinngg.... bang!" or a series of bangs, but this is not
the issue.
These days, if you ask me to explain the origins of the universe, from
a physical point of view I am going to discuss "big bang". I can do this.
It just does not have the same emotional satisfaction to me as "c c"
but that is too bad for me; I do not go around spreading antiquated
theories to people who ask me in good faith for information.
But what if the evidence were not all in yet? What if there were an
equal number of reasons to believe one or the other? What would I be
doing? Talking about continuous creation. I might add a footnote that
there was "this other theory ... the big bang theory" but I would not
discuss it much. I have that strong an emotional attachment to
"continuous creation".
You can also read that other great issues in physics and astronomy had
their great believers -- there were the great "wave versus particle"
theories of light, and The Tycho Brahe cosmology versus the Kepler
cosmology, and these days you get similar arguments ...
In 50 years, we may all look back and say, well, how silly, everyone
should have seen that X, since X is now patently obvious. This will
explain why people believe X now, but not why people believed X then,
or why people DIDN'T believe X then.
Why didn't Tycho Brahe come up with Kepler's theories? It wasn't
that Kepler was a better experimenter, for Kepler himself admits
that he was a lousy experimenter and Brahe was renowned for having
the best instruments in the world, and being the most painstaking
in measurements. It wasn't that they did not know each other, for
Kepler worked with Brahe, and replaced him as Royal Astronomer, and
was familiar with his work before he ever met Brahe...
It wasn't that Brahe was religious and Kepler was not, for it was
Kepler that was almost made a minister and studied very hard in Church
schools (which literally brought him out of peasantry into the middle
class) while Brahe, the rich nobleman, could get away with acts that
the church frowned upon (to put it mildly).
Yet Kepler was able to think in heliocentric terms, while Brahe,
who came so...so..close balked at the idea and put the sun circling
the earth while all the other planets circled the sun. Absolutely
astonishing!
I do not know where these differences came from. However, I have a
pretty good idea why continuous creation is more emotionally satisfying
for me than "big bang" (though these days I am getting to like
"bang! sproing! bang!" as well.) As a child, i ran across the "c c"
theory at the same time as i ran across all sorts of the things that
interest me to this day. In particular, I recall reading it at the
same time that I was doing a long study of myths, or creation myths
in particular. Certain myths appealed to me, and certain ones did not.
In particular, the myths that centred around the Judaeo-Christian
tradition (the one god created the world -- boom!) had almost no
appeal to me in those days, since I had utter and extreme loathing for
the god in question. (This in turn was based on the discovery that
this same wonderful god was the one that tortured and burned millions
in his name for the great sin of heresy.) And thus, "big bang"
which smacked of "poof! god created" was much less favoured by me
at age 8 than continuous creation (no creator necessary).
Now that I am older, I have a lot more tolerance for Yahweh, and
I do not find it intolerable to believe in the Big Bang. However,
it is not as satisfying. Thus I know that some of my beliefs
which in another time could have been essential to my scientific
theories and inspirations, are based on an 8-year-old me reading
about the witchcraft trials.
It seems likely that somebody out there is furthering science by
discovering new theories based on ideas which are equally scientific.
Laura Creighton
utzoo!utcsstat!laura
------------------------------
Date: Fri 14 Oct 83 10:50:52-PDT
From: WYLAND@SRI-KL.ARPA
Subject: FORTH CONVENTION ANNOUNCEMENT
5TH ANNUAL FORTH NATIONAL CONVENTION
October 14-15, 1983
Hyatt Palo Alto
4920 El Camino Real
Palo Alto, CA 94306
Friday 10/14: 12:00-5:00 Conference and Exhibits
Saturday 10/15: 9:00-5:00 Conference and Exhibits
7:00 Banquet and Speakers
This FORTH convention includes sessions on:
Relational Data Base Software - an implementation
FORTH Based Instruments - implementations
FORTH Based Expert Systems - GE DELTA system
FORTH Based CAD system - an implementation
FORTH Machines - hardware implementations of FORTH
Pattern Recognition Based Programming System - implementation
Robotics Uses - Androbot
There are also introductory sessions and sessions on
various standards. Entry fee is $5.00 for the sessions and
exhibits. The banquet features Tom Frisna, president of
Androbot, as the speaker (fee is $25.00).
------------------------------
Date: 13 Oct 1983 1441:02-EDT
From: Sylvia Brahm <BRAHM@CMU-CS-C.ARPA>
Subject: IUS Presentation
[Reprinted from the CMU-C bboard.]
George Sperling from NYU and Bell Laboratories will give a talk
on Monday, October 17, 3:30 to 5:00 in Wean Hall 5409.
Title will be Image Processing and the Logic of Perception.
This talk is not a unification but merely the temporal juxtaposition
of two lines of research. The logic of perception
involves using unreliable, ambiguous information to arrive at
a categorical decision. Critical phenomena are multiple stable
states (in response to the same external stimulus) and path
dependence (hysteresis): the description is potential theory.
Neural models with local inhibitory interaction are the antecedents
of contemporary relaxation methods. New (and old)
examples are provided from binocular vision and depth perception,
including a polemical demonstration of how the perceptual decision
of 3D structure in a 2D display can be dominated by an irrelevant
brightness cue.
Image processing will deal with the practical problem of squeezing
American Sign Language (ASL) through the telephone network.
Historically, an image (e.g., TV at 4 MHz) has been valued at more
than 10^3 speech tokens (e.g., telephone at 3 kHz). With image-
processed ASL, the ratio is shown to be approaching unity.
Movies to illustrate both themes will be shown. Appointments to
speak with Dr. Sperling can be made by calling x3802.
------------------------------
End of AIList Digest
********************
∂15-Oct-83 1036 CLT SEMINAR IN LOGIC AND FOUNDATIONS
To: "@DIS.DIS[1,CLT]"@SU-AI
SPEAKER: Prof. J. E. Fenstad, University of Oslo
TITLE: Hyperfinite probability theory; basic ideas and applications
in natural sciences
TIME: Wednesday, Oct. 19, 4:15-5:30 PM
PLACE: Stanford Mathematics Dept. Faculty Lounge, 383N
The talk will assume some acquaintance with non-standard analysis
(existence of the extensions, transfer). But the ideas of hyperfinite
probability theory (e.g. Loeb construction) will be explained before turning
to applications, which will mainly be to hyperfinite spin systems
(statistical mechanics, polymer models, field theory). The models
will be fully explained, so no knowledge of "advanced" physics is presupposed.
S. Feferman
∂16-Oct-83 1501 BRODER@SU-SCORE.ARPA Next AFLB talk(s)
Received: from SU-SCORE by SU-AI with TCP/SMTP; 16 Oct 83 15:00:58 PDT
Date: Sun 16 Oct 83 15:01:03-PDT
From: Andrei Broder <Broder@SU-SCORE.ARPA>
Subject: Next AFLB talk(s)
To: aflb.all@SU-SCORE.ARPA
cc: sharon@SU-SCORE.ARPA
Stanford-Office: MJH 325, Tel. (415) 497-1787
N E X T A F L B T A L K (S)
10/20/83 - Ken Clarkson (Stanford):
"Algorithms for the All Nearest Neighbor Problem"
The all nearest neighbor problem is the following: given a set A of n
points in d-dimensional Euclidean space, find the nearest neighbors in
set A of each point in A. The best worst-case algorithm known for
this problem in computational geometry requires O(n log^(d-1) n) time.
I will describe a simple approximation algorithm requiring O(n log
epsilon) time, an algorithm employing random sampling that requires
O(n log n) expected time for any input set A, and an algorithm with
linear expected time for i.i.d. random input points from an extremely
broad class of probability distributions. The ideas for the
algorithms also have application in computing minimum spanning trees
in coordinate spaces; if time permits I will describe such an
application.
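For readers unfamiliar with the problem, the statement above is easy to turn
into a brute-force O(n^2) reference solution. The following Prolog sketch uses
invented sample points and is not material from the talk.

% Invented sample points (2-D here; the scheme is dimension-independent).
point(p1, [0.0, 0.0]).
point(p2, [1.0, 0.0]).
point(p3, [0.0, 2.0]).

% sq_dist(+Xs, +Ys, -D): squared Euclidean distance between coordinate lists
sq_dist([], [], 0.0).
sq_dist([X|Xs], [Y|Ys], D) :- sq_dist(Xs, Ys, D0), D is D0 + (X-Y)*(X-Y).

% nearest(?P, -Q, -D): Q is a nearest neighbor of P, at squared distance D
nearest(P, Q, D) :-
        point(P, Cp),
        findall(D1-Q1, (point(Q1, C1), Q1 \== P, sq_dist(Cp, C1, D1)), Pairs),
        keysort(Pairs, [D-Q|_]).

% all_nearest(-L): every point paired with one of its nearest neighbors
all_nearest(L) :- findall(P-Q, nearest(P, Q, _), L).

% ?- all_nearest(L).     L = [p1-p2, p2-p1, p3-p1]  for the sample points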
******** Time and place: Oct. 20, 12:30 pm in MJ352 (Bldg. 460) *******
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Regular AFLB meetings are on Thursdays, at 12:30pm, in MJ352 (Bldg.
460).
If you have a topic you would like to talk about in the AFLB seminar
please tell me. (Electronic mail: broder@su-score.arpa, Office: Jacks
Hall 325, 497-1787) Contributions are wanted and welcome. Not all
time slots for the autumn quarter have been filled so far.
For more information about future AFLB meetings and topics you might
want to look at the file [SCORE]<broder>aflb.bboard .
- Andrei Broder
-------
∂17-Oct-83 0120 LAWS@SRI-AI.ARPA AIList Digest V1 #79
Received: from SRI-AI by SU-AI with TCP/SMTP; 17 Oct 83 01:19:42 PDT
Date: Sunday, October 16, 1983 10:13PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #79
To: AIList@SRI-AI
AIList Digest Monday, 17 Oct 1983 Volume 1 : Issue 79
Today's Topics:
AI Societies - Bledsoe Election,
AI Education - Videotapes & Rutgers Mini-Talks,
Psychology - Intuition & Consciousness
----------------------------------------------------------------------
Date: Fri 14 Oct 83 08:41:39-CDT
From: Robert L. Causey <Cgs.Causey@UTEXAS-20.ARPA>
Subject: Congratulations Woody!
[Reprinted from the UTexas-20 bboard.]
Woody Bledsoe has been named president-elect of the American
Association for Artificial Intelligence. He will become
president in August, 1984.
According to the U.T. press release Woody said, "You can't
replace the human, but you can greatly augment his abilities."
Woody has greatly augmented the computer's abilities. Congratulations!
------------------------------
Date: 12 Oct 83 12:59:24-PDT (Wed)
From: ihnp4!hlexa!pcl @ Ucb-Vax
Subject: AI (and other) videotapes to be produced by AT&T Bell
Laboratories
Article-I.D.: hlexa.287
[I'm posting this for someone who does not have access to netnews.
Send comments to the address below; electronic mail to me will be
forwarded. - PCL]
AT&T Bell Laboratories is planning to produce a
videotape on artificial intelligence that concentrates
on "knowledge representation" and "search strategies"
in expert systems. The program will feature a Bell
Labs prototype expert system called ACE.
Interviews of Bell Labs developers will provide the
content. Technical explanations will be made graphic
with computer generated animation.
The tape will be sold to colleges and industry by
Hayden Book Company as part of a software series.
Other tapes will cover Software Quality, Software
Project Management and Software Design Methodologies.
Your comments are welcome. Write to W. L. Gaddis,
Senior Producer, Bell Laboratories, 150 John F. Kennedy
Parkway, Room 3L-528, Short Hills, NJ 07078
------------------------------
Date: 16 Oct 83 22:42:42 EDT
From: Sri <Sridharan@RUTGERS.ARPA>
Subject: Mini-talks
Recently two notices were copied from the Rutgers bboard to Ailist.
They listed a number of "talks" by various faculty back to back.
Those who wondered how a talk could be given in 10 minutes and
those who wondered why a talk would be given in 10 minutes may
be glad to know the purpose of the series. This is an innovative
method designed by the CS graduate student society to introduce
new graduate students and new faculty members to
the research interests of the CS faculty. Each talk typically outlined
the area of CS and AI of interest to the faculty member, discussed
research opportunities and the background (readings, courses) necessary
for doing research in that area.
I have participated in this mini-talk series for several years and
have found it valuable to me as a speaker. To be given about 10 minutes
to say what I am interested in does force me to distill my thoughts and to
say them simply. The feedback from students is also positive.
Perhaps you will hear from some of the students too.
------------------------------
Date: 11 Oct 83 2:44:12-PDT (Tue)
From: harpo!utah-cs!shebs @ Ucb-Vax
Subject: Re: the Halting problem.
Article-I.D.: utah-cs.1985
I share your notion (that human ability is limited, and that machines
might actually go beyond man in "consciousness"), but not your confidence.
How do you intend to prove your ideas? You can't just wait for a fantastic
AI program to come along - you'll end up right back in the Turing Test
muddle. What *is* consciousness? How can it be characterized abstractly?
Think in terms of universal psychology - given a being X, is there an
effective procedure (used in the technical sense) to determine whether
that being is conscious? If so, what is that procedure?
AI is applied philosophy,
stan the l.h.
utah-cs!shebs
ps Re rational or universal psychology: a professor here observed that
it might end up with the status of category theory - mildly interesting
and all true, but basically worthless in practice... Any comments?
------------------------------
Date: 12 Oct 83 11:43:39-PDT (Wed)
From: decvax!cca!milla @ Ucb-Vax
Subject: Re: the Halting problem.
Article-I.D.: cca.5880
Of course self-awareness is real. The point is that self-awareness
comes about BECAUSE of the illusion of consciousness. If you were
capable of only very primitive thought, you would be less self-aware.
The greater your capacity for complex thought, the more you perceive
that your actions are the result of an active, thinking entity. Man,
because of his capacity to form a model of the world in his mind, is
able to form a model of himself. This all makes sense from a purely
physical viewpoint; there is no need for a supernatural "soul" to
complement the brain. Animals appear to have some self-awareness; the
quantity depends on their intelligence. Conceivably, a very advanced
computer system could have a high degree of self-awareness. As with
consciousness, it is lack of information -- how the brain works, random
factors, etc. -- which makes self-awareness seem to be a very special
quality. In fact, it is a very simple, unremarkable characteristic.
M. Massimilla
------------------------------
Date: 12 Oct 83 7:16:26-PDT (Wed)
From: harpo!eagle!mhuxi!mhuxl!ulysses!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: RE: Physics and Intuition
Article-I.D.: ncsu.2367
I intend this to be my final word on the matter. I intend it to be
brief: as someone said, a bit more tolerance on this group would help.
From Laura we have a wonderful story of the intermeshing of physics and
religion. Well, I picked molecular physics for its avoidance of any
normal life experiences. Cosmology and creation are not in that category
quite so strongly because religion is an everyday thing and will lead to
biases in cosmological theories. Clearly there is a continuum from
things which are divorced from everyday experience to those that are
very tightly connected to it. My point is that most "hard" sciences
are at one end of the continuum while psychology is clearly way over
at the other end, by definition. It is my position that the rather
big difference between the way one can think about the two ends of the
spectrum suggests that what works well at one end may well be quite
inappropriate at the other. Or it may work fine. But there is a burden
of proof that I hand off to the rational psychologists before I will
take them more seriously than I take most psychologists. I have the same
attitude towards cosmology. I find it patently ludicrous that so many
people push our limited theories so far outside the range of applicability
and expect the extrapolation to be accurate. Such extrapolation is
an interesting way to understand the failing of the theories, but to
believe that DOES require faith without substantiation.
I dislike being personal, but Laura is trying to make it seem black and
white. The big bang has hardly been proved. But she seems to be saying
it has. It is of course not so simple. Current theories and data
seem to be tipping the scales, but the scales move quite slowly and will
no doubt be straightened out by "new" work 30 years hence.
The same is true of my point about technical reasoning. Clearly no
thought can be entirely divorced from life experiences without 10
years on a mountain-top. It's not that simple. That doesn't mean that
there are not definable differences between different ways of thinking
and that some may be more suitable to some fields. Most psychologists
are quite aware of this problem (I didn't make it up) and as a result
purely experimental psychology has always been "trusted" more than
theorizing without data. Hard numbers give one some hope that it is
the world, not your relationship with a pet turtle, that is speaking in your
work.
If anyone has any more to say to me about this, send me mail, please.
I suspect this is getting tiresome for most readers. (It's getting
tiresome for me...) If you quote me or use my name, I will always
respond. This network with its delays is a bad debate forum. Stick to
ideas in abstraction from the proponent of the idea. And please look
for what someone is trying to say before assuming that they are blathering.
----GaryFostel----
------------------------------
Date: 14 Oct 83 13:43:56 EDT (Fri)
From: Paul Torek <flink%umcp-cs@CSNet-Relay>
Subject: consciousness and the teleporter
From Michael Condict ...!cmcl2!csd1!condict
This, then, is the reason I would never step into one of those
teleporters that functions by ripping apart your atoms, then
reconstructing an exact copy at a distant site. [...]
In spite of the fact that consciousness (I agree with the growing chorus) is
NOT an illusion, I see nothing wrong with using such a teleporter. Let's
take the case as presented in the sci-fi story (before Michael Condict rigs
the controls). A person disappears from (say) Earth and a person appears at
(say) Tau Ceti IV. The one appearing at Tau Ceti is exactly like the one
who left Earth as far as anyone can tell: she looks the same, acts the same,
says the same sort of things, displays the same sort of emotions. Note that
I did NOT say she is the SAME person -- although I would warn you not to
conclude too hastily whether she is or not. In my opinion, *it doesn't
matter* whether she is or not.
To get to the point: although I agree that consciousness needs something to
exist, there *IS* something there for it -- the person at Tau Ceti. On
what grounds can anyone believe that the person at Tau Ceti lacks a
consciousness? That is absurd -- consciousness is a necessary concomitant
of a normal human brain. Now there IS a question as to whether the
conscious person at Tau Ceti is *you*, and thus as to whether his mind
is *your* mind. There is a considerable philosophical literature on this
and very similar issues -- see *A Dialogue on Personal Identity and
Immortality* by John Perry, and "Splitting Self-Concern" by Michael B. Green
in *Pacific Philosophical Quarterly*, vol. 62 (1981).
But in my opinion, there is a real question whether you can say whether
the person at Tau Ceti is you or not. Nor, in my opinion, is that
question really important. Take the modified case in which Michael Condict
rigs the controls so that you are transported, yet remain also at Earth.
Michael Condict calls the one at Earth the "original", and the one at Tau
Ceti the "copy". But how do you know it isn't the other way around -- how
do you know you (your consciousness) weren't teleported to Tau Ceti, while
a copy (someone else, with his own consciousness) was produced at Earth?
"Easy -- when I walk out of the transporter room at Earth, I know I'm still
me; I can remember everything I've done and can see that I'm still the same
person." WRONGO -- the person at Tau Ceti has the same memories, etc. I
could just as easily say "I'll know I was transported when I walk out of the
transporter room at Tau Ceti and realize that I'm still the same person."
So in fairness, we can't say "You walk out of the transporter room at both
ends, with the original you realizing that something went wrong." We have
to say "You walk out of the transporter at both ends, with *the one at
Earth* realizing something is wrong." But wait -- they can't BOTH be you --
or can they? Maybe neither is you! Maybe there's a continuous flow of
"souls" through a person's body, with each one (like the "copy" at Tau Ceti
(or is it at Earth)) *seeming* to remember doing the things that that body
did before ...
If you acknowledge that consciousness is rooted in the physical human brain,
rather than some mysterious metaphysical "soul" that can't be seen or
touched or detected in any way at all, you don't have to worry about whether
there's a continuous flow of consciousnesses through your body. You don't
have to be a dualist to recognize the reality of consciousness; in fact,
physicalism has the advantage that it *supports* the commonsense belief that
you are the same person (consciousness) you were yesterday.
--Paul Torek, U of MD, College Park
..umcp-cs!flink
------------------------------
End of AIList Digest
********************
∂17-Oct-83 0221 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #38
Received: from SU-SCORE by SU-AI with TCP/SMTP; 17 Oct 83 02:21:15 PDT
Date: Sunday, October 16, 1983 11:25AM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #38
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Monday, 17 Oct 1983 Volume 1 : Issue 38
Today's Topics:
Implementations - User Convenience Vs. Elegance
& findall Vs. bagof,
Solutions - The `is_all' Predicate
----------------------------------------------------------------------
Date: 14 Oct 83 1237 PDT
From: Dick Gabriel <RPG@SU-AI>
Subject: Elegance and Logical Purity
In the Lisp world, as you know, there are 2 Lisps that serve as
examples for this discussion: T and Common Lisp. T is based on
Scheme and, as such, it is relatively close to a `pure' Lisp or
even a lambda-calculus-style Lisp. Common Lisp is a large,
`user-convenient' Lisp. What are the relative successes of these
two Lisps ? T appeals to the few, me included, while Common Lisp
appeals to the many. The larger, user-convenient Lisps provide
programmers with tools that help solve problems, but they don't
dictate the style of the solutions.
Think of it this way: When you go to an auto mechanic and you
see he has a large tool chest with many tools, are you more or
less confident in him than if you see he has a small tool box
with maybe 5 tools ? Either way our confidence should be based
on the skill of the mechanic, but we expect a skillful mechanic
with the right tools to be more efficient and possibly more
accurate than the mechanic who has few tools, or who merely has
tools and raw materials for making further tools.
One could take RPLACA as an analog to a user-convenience in this
situation. We do not need RPLACA: it messes up the semantics, and
we can get around it with other, elegant and pure devices. However,
RPLACA serves user convenience by providing an efficient means of
accomplishing an end. In supplying RPLACA, I, the implementer,
have thought through what the user is trying to do. No user would
appreciate it if I suggested that I knew better than he what he is
doing, and proposed that he replace all list structure that he might
wish to side-effect with closures and then hope for
a smarter compiler someday.
I think it shows more contempt for a user's abilities to dictate a
solution to him in the name of `elegance and logical purity' than
for me to think through what he wants for him.
I am also hesitant to foist on people systems or languages that
are so elegant and pure that I have trouble explaining them to users
because I am subject to being ``muddled about them myself.''
Maybe it is stupid to continue down the Lisp path, but Lisp is the
second oldest language (after FORTRAN), and people clamor to use it.
Recall what Joel Moses said when comparing APL with Lisp.
APL is perfect; it is like a diamond. But like a diamond
you cannot add anything to it to make it more perfect, nor
can you add anything to it and have it remain a diamond.
Lisp, on the other hand, is like a ball of mud. You can add
more mud to it, and it is still a ball of mud.
I think user convenience is like mud.
-rpg-
------------------------------
Date: Fri 14 Oct 83 08:26:56-PDT
From: Pereira@SRI-AI
Subject: More on findall Vs. bagof
Consider the following code:
p(S) :- q(a,X), r(X,S).
u(S) :- r(X,S), q(a,X).
r(X,S) :- findall(Y,a(X,Y),S).
q(X,X).
a(a,1).
a(b,2).
a(a,3).
?- p(S). will return S=[2], whereas ?- u(S) will return S=[1,2,3].
This is just because the two goals q and r were exchanged! Is this
the kind of behavior one should expect from a logic programming
system ? Of course not! The problem is similar to the one of using
\+P when P has unbound variables.
In contrast, if findall is replaced by bagof in the example,
both queries will return S=[2] as they should.
-- Fernando Pereira
------------------------------
Date: Fri 14 Oct 83 14:49:03-PDT
From: Vivek Sarkar <JLH.Vivek@SU-SIERRA>
Subject: The `is_all' Predicate (Problem Posed by Bijan Arbab)
One way of defining is_all in "pure" Prolog is as follows:
/* is_all( A, Q ) asserts that A is the list of all distinct
   terms that satisfy Q; assume that Q is unary */
is_all( A, Q ) :- is_all_but( A, Q, [] ) .
/* is_all_but( A, Q, X ) asserts that A is the list of all
   terms that satisfy Q, but are NOT in list X. If X is empty
   then A is the list of all terms that satisfy Q, which is
   what we want. */
is_all_but( A, Q, X ) :- A = [ H | T ],
                         call( Q, H ),       /* apply the unary predicate Q to H */
                         not_in( H, X ),
                         is_all_but( T, Q, [ H | X ] ) .
is_all_but( [], _Q, _X ).
/* If the previous clause fails then A must be empty. */
not_in( H, [ Hx | Tx ] ) :- not( H = Hx ), not_in( H, Tx ) .
not_in( _H, [] ).
So, the list of all terms that satisfy a predicate can be obtained
by carrying around a partial list of generated terms, and using it
to ensure that the next term is distinct from all previous terms.
Obviously, this solution is slower than one which uses asserts,
or a similar global memory mechanism (e.g., writing onto a file).
It is slower because, in generating the ith instantiation (say)
produced by Q, we start at the beginning and call Q i times,
rather than continuing from the (i-1)th instantiation. I'd be
interested to know of a ``pure'' Prolog solution, which avoids this
extra computation.
-- Vivek Sarkar
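A quick check of the definition above, on three invented facts; the first
solution is the full list, and because the second is_all_but clause has no
cut, shorter lists also appear on backtracking.

p(a).  p(b).  p(c).          % invented sample facts for a unary predicate

% ?- is_all( L, p ).
%    L = [a,b,c] ;           % first solution
%    L = [a,b] ;  L = [a] ;  % further solutions on backtracking
%    ...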
------------------------------
Date: Sat, 15 Oct 83 14:30:07 PDT
From: Bijan Arbab <v.Bijan@UCLA-LOCUS>
Subject: The Proposed Solution For `is-all'
Thank you for sending me the solution. However, I believe there
are two important points that need to be pointed out about your
solution, and in general about all solutions that are along the
same path.
1. What if there are two assertions in the world of the same
type ?! I.e.
p(a)
p(b)
p(c)
p(a)
The correct answer would have to be the list `a.b.c.a.nil';
your solution will return the list `a.b.c.nil'.
2. If you are thinking about solving the problem, please note
the following:
in the goal is-all(A,Y,Q), Q can be a complex goal consisting
of a conjunction of other goals, and we are interested in the Y
terms that occur in Q only. E.g., if the goal is
is-all(A,F,p(A)&p(A,B,C)&p(A,C,F))
Then list A is the collection of all F's such that
p(A)&p(A,B,C)&p(A,C,F) is true.
In general I believe the pure and efficient solution to this problem
would involve functions that are not currently available in Prolog,
namely Data Abstraction. Hideyuki Nakashima and Norihisa Suzuki have
written a paper on the subject that appeared in New Generation
Computing which is An International Journal on Fifth Generation
Computers.
In short, for our problem, we need a way of naming a goal and then
asking for the next solution to the goal whenever we need one. The
NPO function proposed in that paper would do the job. But
unfortunately it is not available in any implementation of Prolog.
I do not see an easy way of building one either !
-- Bijan
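On point 1, it is worth noting that the standard collection built-ins already
differ on exactly this case. With the four assertions above loaded as facts,
and assuming a Prolog that provides findall, bagof, and setof:

p(a).  p(b).  p(c).  p(a).      % the four assertions of point 1

% ?- findall(X, p(X), L).       L = [a,b,c,a]
% ?- bagof(X, p(X), L).         L = [a,b,c,a]
% ?- setof(X, p(X), L).         L = [a,b,c]     (sorted, duplicates removed)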
------------------------------
End of PROLOG Digest
********************
∂17-Oct-83 1541 SCHMIDT@SUMEX-AIM.ARPA LM-2 unavailable Tuesday morning (10/18)
Received: from SUMEX-AIM by SU-AI with TCP/SMTP; 17 Oct 83 15:41:01 PDT
Date: Mon 17 Oct 83 15:42:39-PDT
From: Christopher Schmidt <SCHMIDT@SUMEX-AIM.ARPA>
Subject: LM-2 unavailable Tuesday morning (10/18)
To: HPP-Lisp-Machines@SUMEX-AIM.ARPA
Tomorrow (Tuesday, 10/18) we will be moving the LM-2 to a different
part of the room to make room for the 3600's. "Again?," you ask. Yes.
Last time we couldn't carry out the plan because Payless was out of 208 v
extension cords, and we had to special order one. (Actually, Tom Dienstbier
made one up.) This should all take place between 9 am and 11 am or so.
--Christopher
-------
∂18-Oct-83 0219 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #39
Received: from SU-SCORE by SU-AI with TCP/SMTP; 18 Oct 83 02:19:31 PDT
Date: Monday, October 17, 1983 7:59PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #39
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Tuesday, 18 Oct 1983 Volume 1 : Issue 39
Today's Topics:
Implementations - findall Vs. bagof & `is_all' Predicate
User Convenience Vs. Elegance
----------------------------------------------------------------------
Date: Mon 17 Oct 83 08:59:49-PDT
From: Pereira@SRI-AI
Subject: Mixup
In my latest "findall vs. bagof" example, I got my 'a's and 'b's mixed
up. The reply to the query is S = [1,3] with bagof, and S = [1,2,3]
or S = [1,3] with findall depending on the order of goals. Sorry for
the confusion. Thanks to Paul Broome for pointing this out.
With respect to the discussion of "is_all in pure Prolog", it is clear
that the general "is_all" with complex goals is rather tricky to
implement because all the abstracted variables need to be replaced by
fresh variables each time round the recursion. I'm still confused by
this discussion, however. What is wrong with setof/bagof ? Also,
what is the relevance of abstract datatypes to this problem ?
-- Fernando Pereira
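For readers following along, the corrected behavior can be checked directly
against the earlier facts a(a,1), a(b,2), a(a,3); the X^ form is the standard
way to get findall-like treatment of a free variable from bagof:

% ?- bagof(Y, a(a,Y), S).       S = [1,3]
% ?- findall(Y, a(X,Y), S).     S = [1,2,3]    (X simply left unbound)
% ?- bagof(Y, a(X,Y), S).       X = a, S = [1,3] ; X = b, S = [2]
% ?- bagof(Y, X^a(X,Y), S).     S = [1,2,3]    (X existentially quantified)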
------------------------------
Date: Mon 17 Oct 83 09:04:05-PDT
From: Pereira@SRI-AI
Subject: Reply to Dick Gabriel
I didn't explain myself well, I see now. The "impure" parts of Common
Lisp are the result of the codification of 20 years of Lisp practice
by 1000s of practitioners. Most of these impurities are there for good
reason, to circumvent limitations of the language or of our ability to
design smart compilers (E.g. compilers that transform inefficient
copying into efficient replacement). However, the situation with
Prolog is very different: only recently has Prolog started being used
by a sizeable community, and many of the impurities in Prolog are
"quick and dirty" solutions to problems the original impementers could
not afford to think through. Given that there has been much less
exploration of the Prolog "problem space" than is the case for Lisp,
it is more likely that principled solutions can be found for problems
that currently are solved in an "ad hoc" way in Prolog. That's why I
think that attempting to enshrine today's poor Prolog practices will
in the long term be detrimental to the good health of Prolog and logic
programming. I was not suggesting that people should stop using
"impure" features when they need them to get the job done (that would
be the authoritarian, "diamond" approach). I just intended to say that
expediency should not be promoted to principle.
With respect to the specific issues under discussion, I don't know of
anybody who has thought through the question of failure vs. error in
builtin procedures, or the question of what kinds of database
modification are suitable for different tasks in Prolog. I would
certainly be disappointed if the current mess in the Prolog systems I
know (and which I helped to create...) were to be perpetuated or
exchanged for "ad hoc" solutions slavishly copied from other languages
without concern for the differences between those languages and
Prolog. The approach I wish Prolog implementers would take is to some
extent that of the Common Lisp effort, but helped by the better
theoretical understanding which I am sure it is possible to achieve
given the nature of Prolog and the lack of effort in this area.
-- Fernando Pereira
------------------------------
End of PROLOG Digest
********************
∂18-Oct-83 0905 GOLUB@SU-SCORE.ARPA Lunch
Received: from SU-SCORE by SU-AI with TCP/SMTP; 18 Oct 83 09:05:37 PDT
Date: Tue 18 Oct 83 09:06:52-PDT
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Lunch
To: faculty@SU-SCORE.ARPA
Richard Brent will be our guest for lunch today. GENE
-------
∂18-Oct-83 0913 GOLUB@SU-SCORE.ARPA Library Keys
Received: from SU-SCORE by SU-AI with TCP/SMTP; 18 Oct 83 09:12:54 PDT
Date: Tue 18 Oct 83 09:13:22-PDT
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Library Keys
To: faculty@SU-SCORE.ARPA
cc: LIBRARY@SU-SCORE.ARPA
The Math Sciences Library is about to institute a policy of not
giving keys to the Faculty. It is claimed many books have disappeared but
of course no one has any idea if this occurs during the day when attendants
are there or at night. Of course they could lock the doors all the time
and have no shortages!
I am strongly opposed to this policy. It is the heavy hand of bureaucracy
and it is solving the problem by penalizing those who have not been the
offenders. If you feel similarly, send a message to LIBRARY@SCORE.
GENE
-------
As a protest against the new key policy, I do not intend to use the
CS library for six months. If I can think of some other way to protest,
I'll do that also.
Gene: The bulletin board message announcing the new policy says that
a majority in each department favor it. I have missed quite a few
meetings, but your message protesting the policy suggests that our
department didn't vote for it. Otherwise, your protest would properly
be directed also at the department.
∂18-Oct-83 1022 LIBRARY@SU-SCORE.ARPA Library Key Policy
Received: from SU-SCORE by SU-AI with TCP/SMTP; 18 Oct 83 10:22:15 PDT
Date: Tue 18 Oct 83 10:22:37-PDT
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Library Key Policy
To: faculty@SU-SCORE.ARPA
cc: herriot@SU-SCORE.ARPA, cottle@SU-SIERRA.ARPA
Over the past four years, I have attempted to gather as much input from the
department concerning the key policy as possible. I have had responses from
faculty who did not want to give up keys and from faculty who felt keys should
be taken up. In March of 1981 a survey was sent out to all faculty concerning
this issue. More recently I talked about the library committee's discussions
of this problem in the May faculty meeting and presented the issue on the
bulletin board. In all these cases, from the replies I did receive, more
expressed concern over security, to the point of giving up keys. I want
the faculty to know that this was a very difficult decision for the library
committee and for me. We are here to serve your research needs and if this
policy will adversely affect your research, it should be brought to the
attention of the library committee. However, the losses we are experiencing
are also having an impact on faculty and graduate student research. Library
staff is also required to spend more time on ways to obtain books for
researchers when the book is lost.
Gene's question, as to whether books are being stolen at night, is a good one. Of
course it is impossible to give you statistics on the when, who and what
concerning the stealing of books. However I have documented a few incidents
and they were all at night. On August 2, 1979, I documented an incident
when a former student in philosophy, whom we were not allowing to check out
books during the day, was allowed in at night by someone who had a key and
checked out books which were at that point virtually lost.
On December 13, 1979, I arrived at the library early to find the door open
and library equipment stolen and in disarray. On May 9, 1980, I documented
an incident involving a law student who was allowed in at night and a staff
member who happened to be here after hours caught him walking out with
materials without checking them out. The next week I was called by the
law library concerning a stack of computer science material which they
found in their library. All the material had been taken from the library
without being checked out. I have stopped patrons many times from allowing
outsiders in the library. This is very similar to the problem the
department has with outsiders getting into Margaret Jacks. At that point,
I took an inventory of the Library of Congress collection and we found
2,567 books missing, or 15% of the collection. This impacts computer
science the heaviest.
Last spring when this was on the bulletin board, several graduate students
came by my office without placing their opinions on the bulletin board
to let me know that the loss of material was such a problem that they
would want to give up keys. Library staff is experiencing a high level
of frustration when they can not provide needed materials to faculty and
graduate students because of our losses. One final factor is the rule
that the giving out of keys cannot be restricted to departments, only to
classes of users. Because we have computer science, mathematics, and
statistics material in our collection we do have faculty and graduate
students from other departments requesting keys. I hope this information
gives you a feel for the complexity of the problem. Please let me and
the committee know how you feel.
Harry Llull
-------
Further idea. When we get a building let's plan for a separate CS
library and make sure we get librarians who support a policy of
keeping keys available.
∂18-Oct-83 1131 pratt%SU-NAVAJO.ARPA@SU-SCORE.ARPA security
Received: from SU-SCORE by SU-AI with TCP/SMTP; 18 Oct 83 11:31:39 PDT
Received: from Navajo by Score with Pup; Tue 18 Oct 83 11:31:20-PDT
Date: Tue, 18 Oct 83 11:31 PDT
From: Vaughan Pratt <pratt@Navajo>
Subject: security
To: library@score
Cc: faculty@score
At 15% losses it would indeed seem that more heavy-handed security measures
are called for.
-v
∂18-Oct-83 1450 @SU-SCORE.ARPA:JMC@SU-AI bureaucracy wins
Received: from SU-SCORE by SU-AI with TCP/SMTP; 18 Oct 83 14:49:53 PDT
Received: from SU-AI.ARPA by SU-SCORE.ARPA with TCP; Tue 18 Oct 83 14:49:53-PDT
Date: 18 Oct 83 1446 PDT
From: John McCarthy <JMC@SU-AI>
Subject: bureaucracy wins
To: faculty@SU-SCORE
I strongly resent our representative on the library committee voting
for abolishing library keys.
∂18-Oct-83 2257 @SU-SCORE.ARPA:JMC@SU-AI
Received: from SU-SCORE by SU-AI with TCP/SMTP; 18 Oct 83 22:57:31 PDT
Received: from SU-AI.ARPA by SU-SCORE.ARPA with TCP; Tue 18 Oct 83 22:58:27-PDT
Date: 18 Oct 83 2256 PDT
From: John McCarthy <JMC@SU-AI>
To: faculty@SU-SCORE
CC: library@SU-SCORE
Further idea. When we get a building let's plan for a separate CS
library and make sure we get librarians who support a policy of
keeping keys available.
∂18-Oct-83 2254 @SU-SCORE.ARPA:JMC@SU-AI
Received: from SU-SCORE by SU-AI with TCP/SMTP; 18 Oct 83 22:54:21 PDT
Received: from SU-AI.ARPA by SU-SCORE.ARPA with TCP; Tue 18 Oct 83 22:55:36-PDT
Date: 18 Oct 83 2253 PDT
From: John McCarthy <JMC@SU-AI>
To: library@SU-SCORE
CC: faculty@SU-SCORE
As a protest against the new key policy, I do not intend to use the
CS library for six months. If I can think of some other way to protest,
I'll do that also.
∂19-Oct-83 0818 LIBRARY@SU-SCORE.ARPA Reply to McCarthy and Keller concerning Library Services
Received: from SU-SCORE by SU-AI with TCP/SMTP; 19 Oct 83 08:18:36 PDT
Date: Wed 19 Oct 83 08:19:43-PDT
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Reply to McCarthy and Keller concerning Library Services
To: su-bboards@SU-SCORE.ARPA
cc: jmc@SU-AI.ARPA, ark@SU-AI.ARPA, cottle@SU-SIERRA.ARPA,
herriot@SU-SCORE.ARPA, faculty@SU-SCORE.ARPA
I felt I should address the issues Prof. McCarthy and Prof. Keller have
raised, and if the department as a whole feels the way they do, I will
definitely want to address this issue of library services. The key issue
will be addressed with the library committee.
In regards to library services, one of my main goals when I came was to
expand the information services of the library. During those four
years I have done the following: videotapes of computer science classes
are offered for viewing in the library; I offer bibliographic instruction
and orientations to new students, I have had 50 new students attending
each year; computerized literature searching and demonstrations;
communicating with the department through score, sierra etc for reference
questions, overdues, recalls etc; increased reference services in the
library; new technical reports list online (and we hope to have the whole
file online). Some very specific decisions I made with computer science
in mind because the former math library had a different policy was to
allow graduate reserves out over night and not to send overdues or fines
on technical reports and reserves unless a user refused to bring material
back for another patron. These are the types of decisions I have implemented
that users often take for granted. If you have been positively impacted
by these decisions let me know.
In reference to our losses, to lessen the impact I have borrowed books
from Berkeley on my name in order to get the material to the patron quickly.
Within the past two months I have had to request over 30 items because we
were not able to find them in the library. It would have been much easier
just to tell the patron the material is not here or it is lost and turn
my back. However that is not how I operate. When I or my staff find
that something a patron needs is lost, we drop everything and go to any
lengths to get it quickly. From the reaction of some people in the
department it might have been better if I had taken that approach. However
the person who would have suffered would have been the graduate student
or researcher who needed the information then.
I have talked with some of you concerning the lack of staffing and increase
of use in the Math/CS Library. I try not to overdo on this because I am
very aware that the department is having its own struggles for space, staff,
and money. But you need to be aware that as the department grows, the use
of the library also increases. In addition, as the impact of computers
reaches all of society, we are having more and more people needing help
in the area of their information needs. We are the Math/CS Library but
we are required to serve the information needs of all the Stanford
community in the areas of computer science, mathematics, and statistics.
(By the way, all of Silicon Valley thinks we should serve them also but
we have to stop somewhere, I guess that is another example of bureaucracy?)
Prof. McCarthy suggests that we should turn to computers to solve this problem.
On this issue, I agree with him. However computers will help libraries give
better service, not put them out of business. I would like to encourage all
users to communicate with us through SCORE, SAIL, etc. I have tried to
announce new books, call for papers and conference dates, technical reports,
new journals etc. on the bulletin board. We can save you time if you
use the electronic mail for various library questions you have. It will
also help us in structuring our day when working on questions that come
through the electronic mail.
If you honestly feel that services have diminished, I want to hear from you
and I want to attempt to make our services more effective. However, I
would like to address this question with specific examples. In the past,
a few people often got very personalized service. My goal is to try to
keep as much of the personalized service as possible but not to waste
staff time on overdues, delivering books personally, etc. Instead I have
staff working on new technical reports exchange agreements, monitoring
the journals and conferences and making sure we get them as fast as possible,
ordering dissertations, books, and technical reports on demand, using
computerized databases for answering reference questions, videotapes.
Thank you in advance for your input.
Harry Llull
-------
∂19-Oct-83 0937 @SU-SCORE.ARPA:JMC@SU-AI
Received: from SU-SCORE by SU-AI with TCP/SMTP; 19 Oct 83 09:36:55 PDT
Received: from SU-AI.ARPA by SU-SCORE.ARPA with TCP; Wed 19 Oct 83 09:36:10-PDT
Date: 19 Oct 83 0933 PDT
From: John McCarthy <JMC@SU-AI>
To: faculty@SU-SCORE
CC: library@SU-SCORE
As far as I can see, the latest message
from the Harry Llull Library is a mere advertisement
proposing an expansion of the empire
and an obfuscation. It is what we usually get
when bureaucracies feel nervous. It in no way addresses
the matter of keys which is the only issue I raised and
about which there is probably little new that can be said.
I still think we should plan a Computer Science Library in
our new building whenever that becomes a real possibility.
∂19-Oct-83 1003 GOLUB@SU-SCORE.ARPA Thanks
Received: from SU-SCORE by SU-AI with TCP/SMTP; 19 Oct 83 10:02:54 PDT
Date: Wed 19 Oct 83 10:04:08-PDT
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Thanks
To: FACULTY@SU-SCORE.ARPA
cc: LIBRARY@SU-SCORE.ARPA
Despite my dispute with Harry Llull on the key issue , I want to say
I am very satisfied with the library administration. Everyone is extremely
helpful and great effort is made to obtain whatever I need.
GENE
-------
∂19-Oct-83 1004 cheriton%SU-HNV.ARPA@SU-SCORE.ARPA Re: Reply to McCarthy and Keller concerning Library Services
Received: from SU-SCORE by SU-AI with TCP/SMTP; 19 Oct 83 10:04:44 PDT
Received: from Diablo by Score with Pup; Wed 19 Oct 83 10:05:20-PDT
Date: Wed, 19 Oct 83 10:04 PDT
From: David Cheriton <cheriton@Diablo>
Subject: Re: Reply to McCarthy and Keller concerning Library Services
To: LIBRARY@SU-Score, su-bboards@SU-Score
Cc: ark@Sail, cottle@SU-SIERRA.ARPA, faculty@SU-Score, herriot@SU-Score,
jmc@Sail
I have generally been quite pleased with the library and feel some comments
being made are unfair. In my experience, computer science libraries
have an especially difficult time with theft of usually the most
valuable (from a reference standpoint) material. That is, the latest
conference publications are stolen before the 1964 OS/360 JCL manual.
I don't care whether the loss rate is 5 percent or 15 percent if
everything I am interested in has been stolen.
Surely, we are collectively concerned about access to materials.
This is reduced by theft as well as tighter security. I would hope that
CSD and the library admin. can agree on a solution that is optimal
in terms of access.
P.S. How about appointing JMC and ARK as "honorary librarians" so they
can have keys?
∂19-Oct-83 1608 SCHREIBER@SU-SCORE.ARPA Library
Received: from SU-SCORE by SU-AI with TCP/SMTP; 19 Oct 83 16:08:33 PDT
Date: Wed 19 Oct 83 16:09:29-PDT
From: Robert Schreiber <SCHREIBER@SU-SCORE.ARPA>
Subject: Library
To: faculty@SU-SCORE.ARPA
Richard Manuck and Harry Llull have both been extremely helpful to me;
and they are really good at their jobs. So I propose that we hire them
as librarians in the new Computer Science Building Library!
Rob
-------
∂19-Oct-83 1611 SCHREIBER@SU-SCORE.ARPA NA Seminar
Received: from SU-SCORE by SU-AI with TCP/SMTP; 19 Oct 83 16:10:59 PDT
Date: Wed 19 Oct 83 16:11:54-PDT
From: Robert Schreiber <SCHREIBER@SU-SCORE.ARPA>
Subject: NA Seminar
To: faculty@SU-SCORE.ARPA
I would like to invite you to the NA Seminar that I will give on
October 31. I will talk about my research on systolic arrays in
numerical computation.
Rob
-------
∂19-Oct-83 1622 @SU-SCORE.ARPA:TOB@SU-AI
Received: from SU-SCORE by SU-AI with TCP/SMTP; 19 Oct 83 16:22:08 PDT
Received: from SU-AI.ARPA by SU-SCORE.ARPA with TCP; Wed 19 Oct 83 16:22:46-PDT
Date: 19 Oct 83 1620 PDT
From: Tom Binford <TOB@SU-AI>
To: faculty@SU-SCORE
Both Harry and Richard have been helpful to me. They have gone out
of their way to get proceedings for me. I think they are both
very competent, concerned and conscientious.
While I want to argue with them and others over the key issue,
I respect and appreciate their work.
One aspect of the key issue is that keyholders frequently let in
unauthorized people. This is the same problem we have at MJH.
It would be a serious cultural change for people to be strict
about not letting other people in.
∂19-Oct-83 2305 @SU-SCORE.ARPA:FEIGENBAUM@SUMEX-AIM.ARPA Re: Library
Received: from SU-SCORE by SU-AI with TCP/SMTP; 19 Oct 83 23:05:16 PDT
Received: from SUMEX-AIM.ARPA by SU-SCORE.ARPA with TCP; Wed 19 Oct 83 23:06:32-PDT
Date: Wed 19 Oct 83 23:07:33-PDT
From: Edward Feigenbaum <FEIGENBAUM@SUMEX-AIM.ARPA>
Subject: Re: Library
To: SCHREIBER@SU-SCORE.ARPA, faculty@SU-SCORE.ARPA
In-Reply-To: Message from "Robert Schreiber <SCHREIBER@SU-SCORE.ARPA>" of Wed 19 Oct 83 16:12:05-PDT
I haven't been following my electronic mail in a while, so I don't know what
all the flap is about. But Harry and Richard are the two best and most helpful
librarians I have ever worked with.
Edf
-------
∂20-Oct-83 0214 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #40
Received: from SU-SCORE by SU-AI with TCP/SMTP; 20 Oct 83 02:14:02 PDT
Date: Wednesday, October 19, 1983 11:02PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #40
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Thursday, 20 Oct 1983 Volume 1 : Issue 40
Today's Topics:
Implementations - User Convenience Vs. Elegance
----------------------------------------------------------------------
Date: 19 October 1983 1952-PDT (Wednesday)
From: Abbott at AEROSPACE (Russ Abbott)
Subject: Purity
In considering the question of Prolog's purity vs. its convenience
for programmers, I wonder how '=..' fits in. As a pure first order
logic system, Prolog disallows variables as predicate names--even
though that would sometimes be very convenient. Yet one can write
Term =.. [Predicate | Arguments],
Term,
in violation of all first order principles. What are the
justifications for these rules ?
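A small, self-contained illustration of the construct in question; double/2
and apply_pred/2 are made-up names for this sketch, not anything standard:

double(X, Y) :- Y is 2*X.            % made-up example predicate

% Build a goal from a predicate name held in a variable, then call it.
apply_pred(Name, Args) :-
        Goal =.. [Name | Args],
        call(Goal).

% ?- apply_pred(double, [3, Y]).     Y = 6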
------------------------------
Date: Tuesday, 18 October 1983 09:32:25 EDT
From: Joseph.Ginder at CMU-CS-SPICE
Subject: Common Lisp Motivation
Being part of the Common Lisp effort, I would like to express an
opinion about the reasons for the inclusion of so many "impurities" in
Common Lisp that differs from that expressed by Fernando Pereira in
the last Prolog Digest. I believe the reason for including much of
what is now Common Lisp in the Common Lisp specification was an effort
to provide common solutions to common problems; this is as opposed to
making concessions to language limitations or people's (in)ability to
write smart compilers. In particular, the reference to optimizing
"inefficient copying into efficient replacement" does not seem a
legitimate compiler optimization (in the general sense) -- this
clearly changes program semantics. (In the absence of side effects,
this would not be a problem, but note that some side effect is
required to do IO.) For a good statement of the goals of the Common
Lisp effort, see Guy Steele's paper in the 1982 Lisp and Functional
Programming Conference Proceedings.
Let me hasten to add that I agree with Pereira's concern that
expediency not be promoted to principle. It is for this very reason
that language features such as flavors and the loop construct were not
included in the Common Lisp specification -- we determined not to
standardize until consensus could be reached that a feature was both
widely accepted and believed to be a fairly good solution to a common
problem. The goal is not to stifle experimentation, but to promote
good solutions that have been found through previous experience. In
no sense do I believe anyone regards the current Common Lisp language
as the Final Word on Lisp.
Also, I have never interpreted Moses' diamond vs. mud analogy to have
anything to do with authoritarianism, only aesthetics. Do others ?
-- Joe Ginder
------------------------------
End of PROLOG Digest
********************
∂20-Oct-83 1120 ELYSE@SU-SCORE.ARPA Message about Visiting Scholar Cards - from Gene H. Golub
Received: from SU-SCORE by SU-AI with TCP/SMTP; 20 Oct 83 11:20:05 PDT
Date: Thu 20 Oct 83 11:20:20-PDT
From: Elyse Krupnick <ELYSE@SU-SCORE.ARPA>
Subject: Message about Visiting Scholar Cards - from Gene H. Golub
To: faculty@SU-SCORE.ARPA, secretaries@SU-SCORE.ARPA,
CSD-Administration: ;
Stanford-Phone: (415) 497-9746
The new policy in Computer Science is that all visitors must be coordinated
through the office of the Chairman. The Registrar's office will no longer
honor any Computer Science requests for "Visiting Scholar" Cards from anyone
other than the following:
Betty Scott, Administrative Officer
Carolyn Tajnai, Manager, Computer Forum and CS Development Activities
Marlie Yearwood, Administrative Assistant and Coordinator of CS Visitors
Gene.
-------
∂20-Oct-83 1158 LIBRARY@SU-SCORE.ARPA Math/CS Library and Electronic Messaging
Received: from SU-SCORE by SU-AI with TCP/SMTP; 20 Oct 83 11:58:17 PDT
Date: Thu 20 Oct 83 11:59:25-PDT
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Math/CS Library and Electronic Messaging
To: su-bboards@SU-SCORE.ARPA
cc: faculty@SU-SCORE.ARPA
Some of you may not be aware that we are attempting to offer more services
through the electronic messaging system. For example, when checking out
books, requesting technical reports or recalls, you can list your electronic
mailing address for us to notify you when something has been received for
you or when an item is needed back from you for another patron. We also
encourage you to use the messaging system to ask us if a book is available
instead of you walking over to find it is out. This should save you time and
will help us in structuring our work and possibly eliminating quequing at
the desk in the library. At this point, I am not setting any restrictions
concerning the types of questions we can handle. We read mail many times
a day, and I hope we can have a quick turnaround on your questions. If
I see that the volume of questions is more than we can handle, I will
try to come up with criteria on what we can handle based on our staffing.
However, when you do list your electronic mailing address, also list your
departmental mailing address. This is particularly important for those
of you who have requested technical reports since we have to mail those
reports through ID mail. We will also need your physical mailing address
for those times when the system is down. Much of this work will be done
by students, so please list the complete addresses, including the computer
system you are on, with an @ sign indicating it is electronic mail.
For those of you who are new, you will notice from time to time my listing
of selective lists of new books and conference announcements online. If
you should have ideas on other types of information of this nature that
you would find helpful let me know.
Harry
-------
∂20-Oct-83 1541 LAWS@SRI-AI.ARPA AIList Digest V1 #80
Received: from SRI-AI by SU-AI with TCP/SMTP; 20 Oct 83 15:40:51 PDT
Date: Thursday, October 20, 1983 9:23AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #80
To: AIList@SRI-AI
AIList Digest Thursday, 20 Oct 1983 Volume 1 : Issue 80
Today's Topics:
Administrivia - Complaints & Seminar Abstracts,
Implementations - Parallel Production System,
Natural Language - Phrasal Analysis & Macaroni,
Psychology - Awareness,
Programming Languages - Elegance and Purity,
Conferences - Reviewers needed for 1984 NCC,
Fellowships - Texas
----------------------------------------------------------------------
Date: Tue 18 Oct 83 20:33:15-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Complaints
I have received copies of two complaints sent to the author
of a course announcement that I published. The complaints
alleged that the announcement should not have been put out on
the net. I have three comments:
First, such complaints should come to me, not to the original
authors. The author is responsible for the content, but it is
my decision whether or not to distribute the material. In this
case, I felt that the abstract of a new and unique AI course
was of interest to the academic half of the AIList readership.
Second, there is a possibility that the complainants received
the article in undigested form, and did not know that it was
part of an AIList digest. If anyone is currently distributing
AIList in this manner, I want to know about it. Undigested
material is being posted to net.ai and to some bboards, but it
should not be showing up in personal mailboxes.
Third, this course announcement was never formally submitted
to AIList. I picked the item up from a limited distribution,
and failed to add a "reprinted from" or disclaimer line to
note that fact. I apologize to Dr. Moore for not getting in
touch with him before sending the item out.
-- Ken Laws
------------------------------
Date: Tue 18 Oct 83 09:01:29-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Seminar Abstracts
It has been suggested to me that seminar abstracts would be more
useful if they contained the home address (or net address, phone
number, etc.) of the speaker. I have little control over the
content of these messages, but I encourage those who compose them
to include such information. Your notices will then be of greater
use to the scientific community beyond just those who can attend
the seminars.
-- Ken Laws
------------------------------
Date: Mon 17 Oct 83 15:44:52-EDT
From: Mark D. Lerner <LERNER@COLUMBIA-20.ARPA>
Subject: Parallel production systems.
The parallel production system interpreter is running
on the 15 node DADO prototype. We can presently run up
to 32 productions, with 12 clauses in each production.
The prototype has been operational since April 1983.
------------------------------
Date: 18 Oct 1983 0711-PDT
From: MEYERS.UCI-20A@Rand-Relay
Subject: phrasal analysis
Recently someone asked why PHRAN was not based on a grammar.
It just so happens ....
I have written a parser which uses many of the ideas of PHRAN
but which organizes the phrasal patterns into several interlocking
grammars, some 'semantic' and some syntactic.
The program is called VOX (Vocabulary Extension System) and attempts
a 'complete' analysis of English text.
I am submitting a paper about the concepts underlying the system
to COLING, the conference on Computational Linguistics.
Whether or not it is accepted, I will make a UCI Technical Report
out of it.
To obtain a copy of the paper, write:
Amnon Meyers
AI Project
Dept. of Computer Science
University of California,
Irvine, CA 92717
------------------------------
Date: Wednesday, 19 October 1983 10:48:46 EDT
From: Robert.Frederking@CMU-CS-CAD
Subject: Grammars; Greek; invective
One comment and two meta-comments:
Re: the validity of grammars: almost no one claims that grammatical
phenomena don't exist (even Schank doesn't go that far). What the
argument generally is about is whether one should, as the first step
in understanding an input, build a grammatical tree, without any (or
much) information from either semantics or the current
conversational context. One side wants to do grammar first, by
itself, and then the other stuff, whereas the other side wants to try
to use all available knowledge right from the start. Of course, there
are folks taking extreme positions on both sides, and people
sometimes get a bit carried away in the heat of an argument.
Re: Greek: As a general rule, it would be helpful if people who send in
messages containing non-English phrases included translations. I
cannot judge the validity of the Macaroni argument, since I don't
completely understand either example. One might argue that I should
learn Greek, but I think expecting me to know Maori grammatical
classes is stretching things a bit.
Re: invective: Even if the reference to Yahweh was meant as a childhood
opinion which has mellowed with age, I object to statements of the
form "this same wonderful god... tortured and burned..." etc.
Perhaps it was a typo. As we all know, people have tortured and
burnt other people for all sorts of reasons (including what sort of
political/economic systems small Asian countries should have), and I
found the statement offensive.
------------------------------
Date: Wednesday, 19 October 1983 13:23:59 EDT
From: Robert.Frederking@CMU-CS-CAD
Subject: Awareness
As Paul Torek correctly points out, this is a metaphysical question.
The only differences I have with his note are over the use of some difficult
terms, and the fact that he clearly prefers the "physicalist" notion. Let
me start by saying that one shouldn't try to prove one side or the other,
since proofs clearly cannot work: awareness isn't subject to proof. The
evidence consists entirely of internal experiences, without any external
evidence. (Let me warn everyone that I have not been formally trained in
philosophy, so some of my terms may be non-standard.) The fact that this
issue isn't subject to proof does not make it trivial, or prevent it from
being a serious question. One's position on this issue determines, I think,
to a large extent one's view on many other issues, such as whether robots
will eventually have the same legal stature as humans, and whether human life
should have a special value, beyond its information handling abilities, for
instance for euthanasia and abortion questions. (I certainly don't want to
argue about abortion; personally, I think it should be legal, but not treated
as a trivial issue.)
At this point, my version of several definitions is in order. This
is because several terms have been confused, due probably to the
metaphysical nature of the problem. What I call "awareness" is *not*
"self-reference": the ability of some information processing systems (including
people) to discuss and otherwise deal with representations of themselves.
It is also *not* what has been called here "consciousness": the property of
being able to process information in a sophisticated fashion (note that
chemical and physical reactions process information as well). "Awareness"
is the internal experience which Michael Condict was talking about, and
which a large number of people believe is a real thing. I have been
told that this definition is "epiphenomenal", in that awareness is not the
information processing itself, but is outside the phenomena observed.
Also, I believe that I understand both points of view; I can argue
either side of the issue. However, for me to argue that the experience of
"awareness" consists solely of a combination of information processing
capabilities misses the "dualist" point entirely, and would require me to
deny that I "feel" the experience I do. Many people in science deny that
this experience has any reality separate from the external evidence of
information processing capabilities. I suspect that one motivation for this
is that, as Paul Torek seems to be saying, this greatly simplifies one's
metaphysics.
Without trying to prove the "dualist" point of view, let me give an
example of why this view seems, to me, more plausible than the
"physicalist" view. It is a variation of something Joseph Weizenbaum
suggested. People are clearly aware, at least they claim to be. Rocks are
clearly not aware (in the standard Western view). The problem with saying
that computers will ever be aware in the same way that people are is that
they are merely re-arranged rocks. A rock sitting in the sun is warm, but
is not aware of its warmth, even though that information is being
communicated to, for instance, the rock it is sitting on. A robot next to
the rock is also warm, and, due to a skillful re-arrangement of materials,
not only carries that information in its kinetic energy, but even has a
temperature "sensor", and a data structure representing its body
temperature. But it is no more aware (in the experiential sense) of what is
going on than the rock is, since we, by merely using a different level of
abstraction in thinking about it, can see that the data structure is just a
set of states in some semiconductors inside it. The human being sitting
next to the robot not only senses the temperature and records it somehow (in
the same sense as the robot does), but experiences it internally, and enjoys
it (I would anyway). This experiencing is totally undetectable to physical
investigation, even when we (eventually) are able to analyze the data
structures in the brain.
An interesting side-note to this is that in some cultures, rocks, trees,
etc., are believed to experience their existence. This is, to me, an
entirely acceptable alternate theory, in which the rock and robot would both
feel the warmth (and other physical properties) they possess.
As a final point, when I consider what I am aware of at any given moment, it
seems to include a visual display, an auditory sensation, and various bits
of data from parts of my body (taste, smell, touch, pain, etc.). There are
many things inside my brain that I am *not* aware of, including the
preprocessing of my vision, and any stored memories not recalled at the
moment. There is a sharp boundary between those things I am aware of and
those things I am not. Why should this be? It isn't just that the high
level processes, whatever they are, have access to only some structures.
They *feel* different from other structures in the brain, whose information
I also have access to, but which I have no feeling of awareness in. It
would appear that there is some set of processing elements to which my
awareness has access. This is the old mind-body problem that has plagued
philosophers for centuries.
To deny this qualitative difference would be, for me, silly, as silly as
denying that the physical world really exists. In any event, whatever stand
you take on this issue is based on personal preferences in metaphysics, and
not on physical proof.
------------------------------
Date: 14 Oct 83 1237 PDT
From: Dick Gabriel <RPG@SU-AI>
Subject: Elegance and Logical Purity
[Reprinted from the Prolog Digest.]
In the Lisp world, as you know, there are 2 Lisps that serve as
examples for this discussion: T and Common Lisp. T is based on
Scheme and, as such, it is relatively close to a `pure' Lisp or
even a lambda-calculus-style Lisp. Common Lisp is a large,
`user-convenient' Lisp. What are the relative successes of these
two Lisps ? T appeals to the few, me included, while Common Lisp
appeals to the many. The larger, user-convenient Lisps provide
programmers with tools that help solve problems, but they don't
dictate the style of the solutions.
Think of it this way: When you go to an auto mechanic and you
see he has a large tool chest with many tools, are you more or
less confident in him than if you see he has a small tool box
with maybe 5 tools ? Either way our confidence should be based
on the skill of the mechanic, but we expect a skillful mechanic
with the right tools to be more efficient and possibly more
accurate than the mechanic who has few tools, or who merely has
tools and raw materials for making further tools.
One could take RPLACA as an analog to a user-convenience in this
situation. We do not need RPLACA: it messes up the semantics, and
we can get around it with other, elegant and pure devices. However,
RPLACA serves user convenience by providing an efficient means of
accomplishing an end. In supplying RPLACA, I, the implementer,
have thought through what the user is trying to do. No user would
appreciate it if I suggested that I knew better than he what he is
doing, proposed that he replace all list structure that he might
wish to modify by side effect with closures, and then hoped for
a smarter compiler someday.
I think it shows more contempt for a user's abilities to dictate a
solution to him in the name of `elegance and logical purity' than
for me to think through what he wants for him.
I am also hesitant to foist on people systems or languages that
are so elegant and pure that I have trouble explaining them to users
because I am subject to being ``muddled about them myself.''
Maybe it is stupid to continue down the Lisp path, but Lisp is the
second oldest language (after FORTRAN), and people clamor to use it.
Recall what Joel Moses said when comparing APL with Lisp.
APL is perfect; it is like a diamond. But like a diamond
you cannot add anything to it to make it more perfect, nor
can you add anything to it and have it remain a diamond.
Lisp, on the other hand, is like a ball of mud. You can add
more mud to it, and it is still a ball of mud.
I think user convenience is like mud.
-rpg-
------------------------------
Date: Tuesday, 18 October 1983 09:32:25 EDT
From: Joseph.Ginder at CMU-CS-SPICE
Subject: Common Lisp Motivation
[Reprinted from the Prolog Digest.]
Being part of the Common Lisp effort, I would like to express an
opinion about the reasons for the inclusion of so many "impurities" in
Common Lisp that differs from that expressed by Fernando Pereira in
the last Prolog Digest. I believe the reason for including much of
what is now Common Lisp in the Common Lisp specification was an effort
to provide common solutions to common problems; this is as opposed to
making concessions to language limitations or people's (in)ability to
write smart compilers. In particular, the reference to optimizing
"inefficient copying into efficient replacement" does not seem a
legitimate compiler optimization (in the general sense) -- this
clearly changes program semantics. (In the absence of side effects,
this would not be a problem, but note that some side effect is
required to do IO.) For a good statement of the goals of the Common
Lisp effort, see Guy Steele's paper in the 1982 Lisp and Functional
Programming Conference Proceedings.
Let me hasten to add that I agree with Pereira's concern that
expediency not be promoted to principle. It is for this very reason
that language features such as flavors and the loop construct were not
included in the Common Lisp specification -- we determined not to
standardize until consensus could be reached that a feature was both
widely accepted and believed to be a fairly good solution to a common
problem. The goal is not to stifle experimentation, but to promote
good solutions that have been found through previous experience. In
no sense do I believe anyone regards the current Common Lisp language
as the Final Word on Lisp.
Also, I have never interpreted Moses' diamond vs. mud analogy to have
anything to do with authoritarianism, only aesthetics. Do others ?
-- Joe Ginder
------------------------------
Date: 17 Oct 1983 07:38:44-PST
From: jmiller.ct@Rand-Relay
Subject: Reviewers needed for 1984 NCC
The Program Committee for the 1984 National Computer Conference, which will be
held in Las Vegas next July 9-12, is about to begin reviewing submitted
papers, and we are in need of qualified people who would be willing to serve
as reviewers. The papers would be sent to you in the next couple of weeks;
the reviews would have to be returned by the end of December.
Since NCC is sponsored by non-profit computer societies and is run largely by
volunteers, it is not possible to compensate reviewers for the time and
effort they contribute. However, to provide some acknowledgement of your
efforts, your name will appear in the conference proceedings and, if you
wish to attend NCC, we can provide you with advance registration forms and
information on hotels close to the convention center. We are also trying to arrange
simplified conference registration for reviewers.
As the chair of the artificial intelligence track, I am primarily concerned
with finding people who would be willing to review papers on AI and/or
human-computer interaction. However, I will forward names of volunteers in
other areas to the appropriate chairs. If you would like to volunteer,
please send me your:
- name,
- mailing address,
- telephone number,
- arpanet or csnet address (if any), and
- subjects that you are qualified to review (it would be ideal if
you could use the ACM categorization scheme)
Either arpanet/csnet mail or US mail to my address below would be fine.
Thanks for your help.
James Miller
Computer * Thought Corporation
1721 West Plano Parkway
Plano, Texas 75075
JMILLER.CT @ RAND-RELAY
------------------------------
Date: Tue 11 Oct 83 10:44:08-CDT
From: Gordon Novak Jr. <CS.NOVAK@UTEXAS-20.ARPA>
Subject: $1K/mo Fellowships at Texas
The Department of Computer Sciences at the University of Texas at Austin
is initiating a Doctoral Fellows program, with fellowships available in
Spring 1984 and thereafter. Recipients must be admitted to the Ph.D.
program; November 1 is the applications deadline for Spring 1984.
Applicants must have a B.A. or B.S. in Computer Science, or equivalent,
a total GRE (combined verbal and quantitative) of at least 1400, and a
GPA of at least 3.5. Doctoral Fellows will serve as Teaching
Assistants for two semesters, then will be given a fellowship (with no
TA duties) for one additional year. The stipend will be $1000/month.
Twenty fellowships per year will be available.
The Computer Sciences Department at the University of Texas is ranked in
the top ten departments by the Jones-Lindzey report. Austin is blessed
with an excellent climate and unexcelled cultural and recreational
opportunities.
For details, contact Dr. Jim Bitner (CS.BITNER@UTEXAS-20), phone (512)
471-4353, or write to Computer Science Department, University of Texas
at Austin, Austin, TX 78712.
------------------------------
End of AIList Digest
********************
∂20-Oct-83 1555 CLT SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
To: "@DIS.DIS[1,CLT]"@SU-AI
SPEAKER: S. Feferman
TITLE: An introduction to "Reverse Mathematics"
TIME: Wednesday, Oct. 26, 4:15-5:30 PM
PLACE: Stanford Mathematics Dept. Faculty Lounge (383-N)
The talk will introduce and survey work by Friedman, Simpson and others,
providing sharp information in the form of equivalences as to which set-
existence axioms are needed to prove various statements in analysis and
algebra.
S. Feferman
∂21-Oct-83 0241 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #41
Received: from SU-SCORE by SU-AI with TCP/SMTP; 21 Oct 83 02:40:54 PDT
Date: Thursday, October 20, 1983 2:57PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #41
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Friday, 21 Oct 1983 Volume 1 : Issue 41
Today's Topics:
Implementations - is←all & bagof,
Suggestion - Consideration
----------------------------------------------------------------------
Date: Wed, 19 Oct 83 16:02:10 PDT
From: Bijan Arbab <v.Bijan@UCLA-LOCUS>
Subject: Why Use is←all
I hope the following will answer some of your questions with respect
to the is←all problem.
1. The is←all function or an equivalent one is a necessary part of
any algorithm that wants to do a breadth-first search. Therefore
it is only natural to ask whether such a function can be
implemented in pure Prolog or not.
In practice, however, it is O.K. to use other equivalent functions
such as setof or bagof. For that matter it is also O.K. to use the
function is←all that is defined by means of delete-axioms or
add-axioms, here is a definition of such a function:
is←all(A,Y,Q) <- Q & addax(save(Y)) & fail.
is←all(A,Y,Q) <- build-list(A).
build-list(Y.A) <- save(Y) & delax(save(Y)) & build-list(A).
build-list(nil).
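A minimal sketch of the same idea in Edinburgh syntax, assuming only
the usual assert/1, retract/1 and call/1 built-ins; the names is←all/3
and saved/1 are illustrative rather than taken from any particular
system:
    is←all(L, Y, Q) :-           % enumerate every solution of Q
            call(Q),
            assert(saved(Y)),    % record this instance of Y as a side effect
            fail.                % and backtrack into the next solution
    is←all(L, Y, Q) :-           % no more solutions: collect the records
            build←list(L).
    build←list([Y|L]) :-
            retract(saved(Y)),   % remove one recorded solution at a time
            !,
            build←list(L).
    build←list([]).              % stop when no saved/1 facts remain
With such a predicate, one level of a breadth-first expansion over a
hypothetical arc/2 relation is simply
    ?- is←all(Successors, Y, arc(a, Y)).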
2. The reason why abstract data types popped up in the discussion is
that, in a paper I was reading about the subject, they introduced
the NPO and CPO mechanism, which are a way of introducing
histories into computation. The authors felt that these were
necessary in order to efficiently implement abstract data types
in Prolog, not my claim.
However I felt that the NPO function is exactly what is needed
to solve this problem. With an NPO one can simply ask for the
next solution of a function. Note that NPO is not part of pure
Prolog either; however, its use here appears to be more elegant
than add-ax and del-ax.
3. The is←all problem is a hard one, not because all the variables
must be reinitialized each time around the recursion ! In fact
the problem can not be solved "recursively" in Prolog.
* tentative proof of above statement *
Any recursive definition for is-all would have to be of the form:
is←all(A,Y,Q) <- Q & {SOMETHING} & is-all(...).
where {SOMETHING} and (...) are properly filled out.
But each new invocation of is←all will be getting evaluated in the
same environment of the father is←all, since the use of delete-axiom
or add-axiom is not allowed. Therefore the solution to Q for `son'
is←all is same as it was for `father' is←all. This implies that only
the first solution to Q is generated by such a function and the others
are not.
The other way of generating solutions will attempt to keep a list of
currently solved goals and make sure that a newly generated solution
is not in that list. This method is not a general one, since if a
goal has two identical solutions, only one of them will be recorded.
All comments are welcome.
-- Bijan
------------------------------
Date: Tue, 18 Oct 83 18:43 PDT
From: Allen VanGelder <AVG@Diablo>
Subject: Let's Check Our Code
The "pure" version of is←all sent to the Digest on Fri Oct. 14 by
Vivek Sarkar has some serious problems, the first being that it
contains a syntax error (which is not trivial to fix).
I suggest that people test their code before broadcasting it, in
consideration of other people's time. Perhaps there are a few
wizards who can grant themselves exemption from this practice,
but we "rank and file" should observe this discipline.
------------------------------
Date: Wednesday, 19-Oct-83 00:55:07-GMT
From: Richard HPS (on ERCC DEC-10) <OKeefe.R.A.@EDXA>
Subject: More About bagof
A certain puzzle solution in this Digest contained the following
rather odd goal:
bagof(M, M↑(x(M, Z), nonvar(Z), Z = N), L)
[BTW: that program is an *excellent* debugging tool for Prolog
implementors. If your assert and retract implementations stand
up to the amazing thrashing this program gives them, they'll stand
up to any amount of normal use. End of insult. End of tip.]
One of the odd things is just the x(M,Z), nonvar(Z), Z=N bit
itself. N is always instantiated to an integer, so x(M,Z),
Z==N would be exactly what is wanted. However, the x(M,Z)
facts in the data base always have M an integer, and Z either
an integer or a variable. Being a variable indicates "not yet
assigned", and writing x(M,unbound) for that case would (a)
eliminate the var/nonvar tests in the program and (b) let this
goal be written as just x(M,N).
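A tiny illustration with hypothetical facts, where the atom unbound
stands for "not yet assigned":
    x(1, 7).
    x(2, unbound).
    x(3, 9).
With N already instantiated to an integer, the goal is then simply
    ?- x(M, 9).
which binds M = 3, and the var/nonvar tests in the program disappear.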
But back to bagof. If you haven't got a copy of the 15-Dec-81
manual, here are pages 51 & 52, retyped:
setof(X, P, S)
Read this as "S is the set of all instances of X such that P is
provable, where that set is non-empty". The term P specifies a
goal or goals as in call(P). S is a set of terms represented as
a list of those terms, without duplicates, in the standard order
for terms [this is defined elsewhere in the manual, and I'm not
going to type that in]. If there are no instances of X such
that P is provable the predicate *fails*.
The variables appearing in the term X should not appear anywhere
else in the clause except within the term P. Obviously, the set
to be enumerated should be finite, and should be enumerable by
Prolog in finite time. It is possible for the provable
instances to contain variables, but in this case the list S will
only provide an imperfect representation of what is in reality
an infinite set. [There is no real need for the variables in X
to be absent from the rest of the clause. If a variable is
bound, all that means is that its binding will be copied into
the instances in the list S. If a variable is not bound, it
will still be unbound after the call to setof, just like 'not'.
And I don't know that I'd call a non-ground term an "imperfect
representation" of an infinite set.]
[*** This is the key paragraph ***]
If there are uninstantiated variables in P which do not also
appear in X, then a call to this evaluable predicate may
**backtrack**, generating alternative values for S corresponding
to different instantiations of the free variables of P. (It is
to cater for such usage that the set S is constrained to be
non-empty.) For example, the call
?- setof(X, X likes Y, S).
might produce two alternative solutions via backtracking:
Y = beer, S = [dick, harry, tom]
Y = cider, S = [bill, jan, tom]
(X remains unbound in both cases). The call
?- setof((Y,S), setof(X, X likes Y, S), SS).
would then produce
SS = [(beer,[dick,harry,tom]), (cider,[bill,jan,tom])]
[*** This paragraph explains ↑ ***]
Variables occurring in P will not be treated as free if they are
explicitly bound within P by an existential quantifier. An
existential quantification is written
Y↑Q
meaning "there exists a Y such that Q is true", where Y is some
Prolog variable. For example,
?- setof(X, Y↑(X likes Y), S).
would produce the single result
X = [bill, dick, harry, jan, tom]
in contrast to the earlier example.
bagof(X, P, Bag)
This is exactly the same as setof except that the list (or
alternative lists) returned will not be ordered, and may contain
duplicates. The effect of this relaxation is to save
considerable time and space in execution. [This may be
misleading. There are three components to the cost of setof,
and two to bagof. Both of them enumerate all solutions like
findall. That is one component. Both then do a sort to bring
together solutions belonging to the same alternative (bindings
for free variables in P). That is the second component. setof
then does another sort for each list of alternatives. So you
save O(N.lgN) time and space by using bagof instead of setof.
But that is less than 50% of the cost of bagof. When there are
NO free variables in P, bagof and setof don't do the first sort,
so the saving for this common case can be considerable. Also
bear in mind that the cost of sorting is strongly implementation
dependent; it could take O(N) space.]
X↑P
The interpreter recognises this as meaning "there exists an X
such that P is true", and treats it as equivalent to call(P).
The use of this explicit existential quantifier outside the
setof and bagof constructs is superfluous.
[End of manual extract.]
What is odd about bagof(M, M↑(x(M, Z), nonvar(Z), Z=N), L) is
thus that a variable which ought NOT to be explicitly quantified
IS (M), and that a variable which SHOULD be (Z) ISN'T. How come
it works ?
To see why quantifying a variable that appears in the template
doesn't confuse Prolog at all while it confuses people badly,
you'll have to look at the code I published here recently. To
see why not quantifying Z does no harm, you have only to realise
that all instances of Z are unified with the same *integer* N,
so that bagof does a findall and a sort, and then discovers that
all of the solutions have the same value for Z. If the goal had
been instead
bagof(M, Z↑(x(M, Z), nonvar(Z), Z=N), L)
bagof would just have done a findall.
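To make the difference concrete, here is a small example with
hypothetical x/2 facts (M always an integer, the second argument an
integer or a still-unbound variable, as in the program under discussion):
    x(1, 10).
    x(2, 20).
    x(3, 10).
    x(4, U).                       % "not yet assigned"
With Z left free, bagof groups the solutions by the binding of Z and
backtracks over the groups:
    ?- bagof(M, (x(M,Z), nonvar(Z)), L).
    Z = 10, L = [1,3] ;
    Z = 20, L = [2]
With Z explicitly quantified, bagof behaves like findall and returns
one flat list:
    ?- bagof(M, Z↑(x(M,Z), nonvar(Z)), L).
    L = [1,2,3]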
Now I don't mean to imply that the author of the program in
question doesn't understand the X↑P notation perfectly well.
This may be just the kind of typing mistake we all make,
which just increased the running time without producing wrong
answers, and could easily go unnoticed for 20 years. But
there seem to be too many Prologs that lack bagof, and I
wouldn't want anyone to take this particular goal as an example
to be imitated. By the same token, there could well be similar
mishaps in some of my utilities, and I would be grateful if
someone spotted them while I still have access to this net.
One thing which the manual should mention and doesn't is that
when you compile a DEC-10 Prolog program that uses setof, bagof,
or for that matter findall, it often stops working. The problem
is that the interface between compiled and interpreted code is
mostly one-way:
compiled code can get at any interpreted code, either by using
call or just by calling a predicate that happens to be
interpreted. But the interpreter can only get at compiled code
which has :-public declarations. And call is the interpreter.
So the generators in
setof(Template, Generator, Set)
bagof(Template, Generator, Bag)
findall(Template, Generator, List)
and the goals in
\+ Goal
not Goal
once(Goal) % once(X) :- call(X), !.
forall(Generator, Test) % forall(G,T) :- \+ (G, \+ T).
are handled by the *interpreter*, and MUST have :-public declarations.
Omitting these declarations is a very common mistake. You don't learn
to avoid it, but you learn to look for it when your compiled program
doesn't work but it did when interpreted. If you haven't got a
compiler, the problem doesn't arise, and there are a couple of other
compilers than DEC-10 Prolog which use a different interface between
compiled and interpreted code (in fact they are just incremental
compilers), and again the problem doesn't arise.
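A minimal sketch of the pitfall, with illustrative predicate names, in
a file meant to be compiled by DEC-10 Prolog:
    :- public parent/2.        % without this declaration the interpreted
                               % call made inside bagof cannot reach the
                               % compiled parent/2 clauses
    parent(tom, bob).
    parent(tom, liz).
    children(X, Kids) :-
            bagof(C, parent(X, C), Kids).   % the generator is run by
                                            % the interpreter
A call such as ?- children(tom, Kids). then gives Kids = [bob, liz];
without the public declaration the interpreted generator typically
cannot find parent/2 and the bagof simply fails.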
------------------------------
End of PROLOG Digest
********************
∂21-Oct-83 1510 @SU-SCORE.ARPA:WIEDERHOLD@SUMEX-AIM.ARPA Re: Math/CS Library Security vs. no key policy
Received: from SU-SCORE by SU-AI with TCP/SMTP; 21 Oct 83 15:10:08 PDT
Received: from SUMEX-AIM.ARPA by SU-SCORE.ARPA with TCP; Fri 21 Oct 83 15:11:03-PDT
Date: Fri 21 Oct 83 15:11:48-PDT
From: Gio Wiederhold <WIEDERHOLD@SUMEX-AIM.ARPA>
Subject: Re: Math/CS Library Security vs. no key policy
To: LIBRARY@SU-SCORE.ARPA
cc: faculty@SU-SCORE.ARPA, ark@SU-AI.ARPA
In-Reply-To: Message from "C.S./Math Library <LIBRARY@SU-SCORE.ARPA>" of Thu 20 Oct 83 20:56:05-PDT
Thanks for inviting me. I have followed the commentary with mixed emotions:
I don't like losses, and I think we have many people we can trust with keys.
I would suggest the following:
A key system which records who uses the keys and when they are used.
A strong reinforcement of the policy not to let others in, or, if they are let in, to have them sign their name, affiliation, etc.
A library rearrangement so that day-time thefts can be reduced by:
1. Making sure that all leavers pass the length of the desk.
2. Having a place for leaving bags outside the library.
3. Having a sign-in policy.
This suggestion hence means that I am neither fully on one side nor the other.
In a situation where we seem to be far from unanimity in either direction, some
new ways must be sought.
Best wishes; you provide a great service to the department!
Gio
-------
∂24-Oct-83 1255 LAWS@SRI-AI.ARPA AIList Digest V1 #81
Received: from SRI-AI by SU-AI with TCP/SMTP; 24 Oct 83 12:54:52 PDT
Date: Monday, October 24, 1983 8:58AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #81
To: AIList@SRI-AI
AIList Digest Monday, 24 Oct 1983 Volume 1 : Issue 81
Today's Topics:
Lisp Machines & Fuzzy Logic - Request,
Rational Psychology,
Reports - AI and Robotics Overviews & Report Sources,
Bibliography - Parallelism and Consciousness,
Learning - Machine Learning Course
----------------------------------------------------------------------
Date: Sun, 23 Oct 83 16:00:07 EDT
From: Ferd Brundick (LTTB) <fsbrn@brl-voc>
Subject: info on Lisp Machines
We are about to embark on an ambitious AI project in which we hope
to develop an Expert System. The system will be written in Lisp
(or possibly Prolog) and will employ fuzzy logic and production
rules. In my role as equipment procurer and novice Lisp programmer,
I would like any information regarding Lisp machines, e.g., what is
available, how do the various machines compare, etc. If this topic
has been discussed before I would appreciate pointers to the info.
On the software side, any discussions regarding fuzzy systems would
be welcomed. Thanks.
dsw, fferd
<fsbrn@brl-voc>
------------------------------
Date: 26 Sep 83 10:01:56-PDT (Mon)
From: ihnp4!drux3!drufl!samir @ Ucb-Vax
Subject: Rational Psychology
Article-I.D.: drufl.670
Norm,
Let me elaborate. Psychology, or logic of mind, involves BOTH
rational and emotional processes. To consider one exclusively defeats
the purpose of understanding.
I have not read the article we are talking about so I cannot
comment on that article, but an example of what I consider a "Rational
Psychology" theory is "Personal Construct Theory" by Kelly. It is an
attractive theory but, in my opinion, it falls far short of describing
"logic of mind" as it fails to integrate emotional aspects.
I consider learning, concept formation, and creativity to have BOTH
rational and emotional attributes, hence it would be better if we
studied them as such.
I may be creating a dichotomy where there is none. (Rational vs.
Emotional). I want to point you to an interesting book "Metaphors we
live by" (I forget the names of Authors) which in addition to discussing
many other ai-related (without mentioning ai) concepts discusses the
question of Objective vs. Subjective, which is similar to what we are
talking here, Rational vs. Emotional.
Thanks.
Samir Shah
AT&T Information Systems, Denver.
drufl!samir
------------------------------
Date: Fri 21 Oct 83 11:31:59-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Overview Reports
I previously mentioned a NASA report described in IEEE Spectrum.
I now have further information from NTIS. The one mentioned
was the last of the following:
An Overview of Artificial Intelligence and Robotics:
Volume II - Robotics, NBSIR-82-2479, March 1982
PB83-217547 Price $13.00
An Overview of Expert Systems, NBSIR-82-2505, May 1982
(Revised October 1982)
PB83-217562 Price $10.00
An Overview of Computer Vision, NBSIR-822582 (or possibly
listed as NBSIR-832582), September 1982
PB83-217554 Price $16.00
An Overview of Computer-Based Natural Language Processing,
NASA-TM-85635 NBSIR-832687 N83-24193 Price $10.00
An Overview of Artificial Intelligence and Robotics;
Volume I - Artificial Intelligence, June 1983
NASA-TM-85836 Price $10.00
The ordering address is
United States Department of Commerce
National Technical Information Service
5285 Port Royal Road
Springfield, VA 22161
-- Ken Laws
------------------------------
Date: Fri 21 Oct 83 11:38:42-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Report Sources
The NTIS literature I have also lists some other useful sources:
University Microfilms, Inc.
300 N. Zeeb Road
Ann Arbor, MI 48106
National Translation Center
SLA Translation Center, The John Crerar Library
35 West 33rd Street
Chicago, IL 60616
Library of Congress,
Photoduplicating Service
Washington, D.C. 20540
American Institute of Aeronautics & Astronautics
Technical Information Service
555 West 57th Street, 12th Floor
New York, NY 10019
National Bureau of Standards
Gaithersburg, MD 20234
U.S. Dept. of Energy,
Div. of Technical Information
P.O. Box 62
Oak Ridge, TN 37830
NASA Scientific and Technical Facility
P.O. Box 8757
Balt/Wash International Airport
Baltimore, MD 21240
-- Ken Laws
------------------------------
Date: Sun, 23 Oct 83 12:21:54 PDT
From: Rik Verstraete <rik@UCLA-CS>
Subject: Bibliography (parallelism and consciousness)
David Rogers asked me if I could send him some of my ``favorite''
readings on the subject ``parallelism and consciousness.'' I searched
through my list, and came up with several references which I think
might be interesting to everybody. Not all of them are directly
related to ``parallelism and consciousness,'' but nevertheless...
Albus, J.S., Brains, Behavior, & Robotics, Byte Publications Inc.
(1981).
Arbib, M.A., Brains, Machines and Mathematics, McGraw-Hill Book
Company, New York (1964).
Arbib, M.A., The Metaphorical Brain, An Introduction to Cybernetics as
Artificial Intelligence and Brain Theory, John Wiley & Sons, Inc.
(1972).
Arbib, M.A., "Automata Theory and Neural Models," Proceedings of the
1974 Conference on Biologically Motivated Automata Theory, pp. 13-18
(June 19-21, 1974).
Arbib, M.A., "A View of Brain Theory," in Selforganizing Systems, The
Emergence of Order, ed. F.E. Yates, Plenum Press, New York (1981).
Arbib, M.A., "Modelling Neural Mechanisms of Visuomotors Coordination
in Frogs and Toad," in Competition and Cooperation in Neural Nets, ed.
Amari, S., and M.A. Arbib, Springer-Verlag, Berlin (1982).
Barto, A.G. and R.S. Sutton, "Landmark Learning: An Illustration of
Associative Search," Biological Cybernetics Vol. 42(1) pp. 1-8
(November 1981).
Barto, A.G., R.S. Sutton, and C.W. Anderson, "Neuron-Like Adaptive
Elements that can Solve Difficult Learning Control Problems," Coins
Technical Report 82-20, Computer and Information Science Department,
University of Massachusetts, Amherst, MA (1982).
Begley, S., J. Carey, and R. Sawhill, "How the Brain Works," Newsweek,
(February 7, 1983).
Davis, L.S. and A. Rosenfeld, "Cooperating Processes for Low-Level
Vision: A Survey," Artificial Intelligence Vol. 17 pp. 245-263
(1981).
Doyle, J., "The Foundations of Psychology," CMU-CS-82-149, Department
of Computer Science, Carnegie-Mellon University, Pittsburgh, PA
(February 18, 1982).
Feldman, J.A., "Memory and Change in Connection Networks," Technical
Report 96, Computer Science Department, University of Rochester,
Rochester, NY (December 1981).
Feldman, J.A., "Four Frames Suffice: A Provisionary Model of Vision and
Space," Technical Report 99, Computer Science Department, University of
Rochester, Rochester, NY (September 1982).
Grossberg, S., "Adaptive Resonance in Development, Perception and
Cognition," SIAM-AMS Proceedings Vol. 13 pp. 107-156 (1981).
Harth, E., "On the Spontaneous Emergence of Neuronal Schemata," pp.
286-294 in Competition and Cooperation in Neural Nets, ed. Amari, S.,
and M.A. Arbib, Springer-Verlag, Berlin (1982).
Hayes-Roth, B., "Implications of Human Pattern Processing for the
Design of Artificial Knowledge Systems," pp. 333-346 in
Pattern-Directed Inference Systems, ed. Waterman, D.A., and F. Hayes-
Roth, Academic Press, New York (1978).
Hofstadter, D.R., Gödel, Escher, Bach: An Eternal Golden Braid, Vintage
Books, New York (1979).
Hofstadter, D.R. and D.C. Dennett, The Mind's I, Basic Books, Inc., New
York (1981).
Holland, J.H., Adaptation in Natural and Artificial Systems, The
University of Michigan Press, Ann Arbor (1975).
Holland, J.H. and J.S. Reitman, "Cognitive Systems Based on Adaptive
Algorithms," pp. 313-329 in Pattern-Directed Inference Systems, ed.
Waterman, D.A., and F. Hayes-Roth, Academic Press, New York (1978).
Kauffman, S., "Behaviour of Randomly Constructed Genetic Nets: Binary
Element Nets," pp. 18-37 in Towards a Theoretical Biology, Vol 3:
Drafts, ed. C.H. Waddington, Edinburgh University Press (1970).
Kauffman, S., "Behaviour of Randomly Constructed Genetic Nets:
Continuous Element Nets," pp. 38-46 in Towards a Theoretical Biology,
Vol 3: Drafts, ed. C.H. Waddington, Edinburgh University Press (1970).
Kent, E.W., The Brains of Men and Machines, Byte/McGraw-Hill,
Peterborough, NH (1981).
Klopf, A.H., The Hedonistic Neuron, Hemisphere Publishing Corporation,
Washington (1982).
Kohonen, T., "A Simple Paradigm for the Self-Organized Formation of
Structured Feature Maps," in Competition and Cooperation in Neural
Nets, ed. Amari, S., and M.A. Arbib, Springer-Verlag, Berlin (1982).
Krueger, M.W., Artificial Reality, Addison-Wesley Publishing Company
(1983).
McCulloch, W.S. and W. Pitts, "A Logical Calculus of the Ideas Immanent
in Nervous Activity," Bulletin of Mathematical Biophysics Vol. 5(4)
pp. 115-133 (December 1943).
Michalski, R.S., J.G. Carbonell, and T.M. Mitchell, Machine Learning,
An Artificial Intelligence Approach, Tioga Publishing Co, Palo Alto, CA
(1983).
Michie, D., "High-Road and Low-Road Programs," AI Magazine, pp. 21-22
(Winter 1981-1982).
Narendra, K.S. and M.A.L. Thathachar, "Learning Automata - A Survey,"
IEEE Transactions on Systems, Man, and Cybernetics Vol. SMC-4(4) pp.
323-334 (July 1974).
Nilsson, N.J., Learning Machines: Foundations of Trainable Pattern-
Classifying Systems, McGraw-Hill, New York (1965).
Palm, G., Neural Assemblies, Springer-Verlag (1982).
Pearl, J., "On the Discovery and Generation of Certain Heuristics," The
UCLA Computer Science Department Quarterly Vol. 10(2) pp. 121-132
(Spring 1982).
Pistorello, A., C. Romoli, and S. Crespi-Reghizzi, "Threshold Nets and
Cell-Assemblies," Information and Control Vol. 49(3) pp. 239-264 (June
1981).
Truxal, C., "Watching the Brain at Work," IEEE Spectrum Vol. 20(3) pp.
52-57 (March 1983).
Veelenturf, L.P.J., "An Automata-Theoretical Approach to Developing
Learning Neural Networks," Cybernetics and Systems Vol. 12(1-2) pp.
179-202 (January-June 1981).
------------------------------
Date: 20 October 1983 1331-EDT
From: Jaime Carbonell at CMU-CS-A
Subject: Machine Learning Course
[Reprinted from the CMU-AI bboard.]
[I pass this on as a list of topics and people in machine learning. -- KIL]
The schedule for the remaining classes in the Machine Learning
course (WeH 4509, tu & thu at 10:30) is:
Oct 25 - "Strategy Acquisition" -- Pat Langley
Oct 27 - "Learning by Chunking & Macro Structures" -- Paul Rosenbloom
Nov 1 - "Learning in Automatic Programming" -- Elaine Kant
Nov 3 - "Language Acquisition I" -- John Anderson
Nov 8 - "Discovery from Empirical Observations" -- Herb Simon
Nov 10 - "Language Acquisition II" -- John Anderson or Brian McWhinney
Nov 15 - "Algorithm Discovery" -- Elaine Kant or Allen Newell
Nov 17 - "Learning from Advice and Instruction" -- Jaime Carbonell
Nov 22 - "Conceptual Clustering" -- Pat Langley
Nov 29 - "Learning to Learn" -- Pat Langley
Dec 1 - "Genetic Learning Methods" -- Stephen Smith
Dec 6 - "Why Perceptrons Failed" -- Geoff Hinton
Dec 8 - "Discovering Regularities in the Environment" -- Geoff Hinton
Dec 13 - "Trainable Stochastic Grammars" -- Peter Brown
------------------------------
End of AIList Digest
********************
∂24-Oct-83 1517 FISCHLER@SRI-AI.ARPA Add to Mailing List
Received: from SRI-AI by SU-AI with TCP/SMTP; 24 Oct 83 15:17:40 PDT
Date: Mon 24 Oct 83 15:18:15-PDT
From: FISCHLER@SRI-AI.ARPA
Subject: Add to Mailing List
To: csli-friends@SRI-AI.ARPA
Please add my name to the mailing list for announcements.
-------
∂24-Oct-83 2139 JF@SU-SCORE.ARPA november bats
Received: from SU-SCORE by SU-AI with TCP/SMTP; 24 Oct 83 21:39:27 PDT
Date: Mon 24 Oct 83 21:39:44-PDT
From: Joan Feigenbaum <JF@SU-SCORE.ARPA>
Subject: november bats
To: aflb.su: ;
who wants to speak? november bats will probably be held at stanford on
either 11/18 or 11/21. if you want to be the stanford speaker, let me know.
thanks,
joan
-------
∂24-Oct-83 2225 BRODER@SU-SCORE.ARPA Next AFLB talk(s)
Received: from SU-SCORE by SU-AI with TCP/SMTP; 24 Oct 83 22:25:10 PDT
Date: Mon 24 Oct 83 22:25:06-PDT
From: Andrei Broder <Broder@SU-SCORE.ARPA>
Subject: Next AFLB talk(s)
To: aflb.all@SU-SCORE.ARPA
cc: sharon@SU-SCORE.ARPA
Stanford-Office: MJH 325, Tel. (415) 497-1787
N E X T A F L B T A L K (S)
10/27/83 - Prof. T. C. Hu (UCSD)
"Graph folding and programmable logic arrays"
-- No abstract available yet --
******** Time and place: Oct. 27, 12:30 pm in MJ352 (Bldg. 460) *******
11/3/83 - Dr. J. M. Robson
"The Complexity of GO and Other Games"
For GO as played in Japan, as for chess and checkers, deciding whether
White can force a win from a given position is an exponential time
complete problem. The Chinese rules of GO differ from the Japanese in
a manner which appears minor but invalidates both the upper and the
lower bound parts of the Exptime completeness proof. Making a similar
change to other games results in their decision problem becoming
exponential time complete.
******** Time and place: Nov. 3, 12:30 pm in MJ352 (Bldg. 460) *******
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Regular AFLB meetings are on Thursdays, at 12:30pm, in MJ352 (Bldg.
460).
If you have a topic you would like to talk about in the AFLB seminar
please tell me. (Electronic mail: broder@su-score.arpa, Office: Jacks
Hall 325, 497-1787) Contributions are wanted and welcome. Not all
time slots for the autumn quarter have been filled so far.
For more information about future AFLB meetings and topics you might
want to look at the file [SCORE]<broder>aflb.bboard .
- Andrei Broder
-------
∂25-Oct-83 1400 @SRI-AI.ARPA:desRivieres.PA@PARC-MAXC.ARPA CSLI Activities for Thursday Oct. 27th
Received: from SRI-AI by SU-AI with TCP/SMTP; 25 Oct 83 14:00:05 PDT
Received: from PARC-MAXC.ARPA by SRI-AI.ARPA with TCP; Tue 25 Oct 83 14:01:16-PDT
Date: Tue, 25 Oct 83 13:59 PDT
From: desRivieres.PA@PARC-MAXC.ARPA
Subject: CSLI Activities for Thursday Oct. 27th
To: csli-friends@SRI-AI.ARPA
Reply-to: desRivieres.PA@PARC-MAXC.ARPA
CSLI SCHEDULE FOR THURSDAY, October 27, 1983
10:00 Research Seminar on Natural Language
Speaker: Ray Perrault (CSLI-SRI)
Title: "Speech Acts and Plans"
Place: Redwood Hall, room G-19
12:00 TINLunch
Discussion leader: Jeffrey S. Rosenschein
Paper for discussion: "Synchronization of Multi-Agent Plans"
by Jeffrey S. Rosenschein.
Place: Ventura Hall
2:00 Research Seminar on Computer Languages
Speaker: Peter Deutsch (Xerox PARC)
Title: "Smalltalk-80: Language and Style in real
Object-oriented Programming System"
Place: Redwood Hall, room G-19
3:30 Tea
Place: Ventura Hall
4:15 Colloquium
Speaker: Jay M. Tenenbaum (Fairchild AI Lab)
Title: "A.I. Research at Fairchild"
Place: Redwood Hall, room G-19
Note to visitors:
Redwood Hall is close to Ventura Hall on the Stanford Campus. It
can be reached from Campus Drive or Panama Street. From Campus Drive
follow the sign for Jordan Quad. Parking is in the C-lot between
Ventura and Jordan Quad.
-------
∂25-Oct-83 1413 @SRI-AI.ARPA:TW@SU-AI This week's talkware seminar - Greg Nelson - Durand 401
Received: from SRI-AI by SU-AI with TCP/SMTP; 25 Oct 83 14:10:45 PDT
Received: from SU-AI.ARPA by SRI-AI.ARPA with TCP; Tue 25 Oct 83 14:09:33-PDT
Date: 25 Oct 83 1404 PDT
From: Terry Winograd <TW@SU-AI>
Subject: This week's talkware seminar - Greg Nelson - Durand 401
To: "@377.DIS[1,TW]"@SU-AI, su-bboards@SU-AI
Talkware Seminar - CS 377
Date: October 26
Speaker: Greg Nelson (Xerox PARC)
Topic: JUNO: a constraint based language for graphics
Time: 2:15 - 4
Place: Durand 401 **** NEW LOCATION FOR VIDEO ***
Abstract: Juno is an interactive, programmable, constraint-based system
for producing graphic images, such as typographic artwork or technical
illustrations. The user of the system can choose between "drawing" an
image on a raster display, constructing the image by the execution of a
Juno program, or using these modes in combination. Both modes are
"constraint-oriented", that is, instead of giving explicit coordinates
for each control point of the image, the user can position a point by
specifying the geometric relation between it and other control points.
Juno is under development at Xerox PARC. A videotape of the program
will be shown.
∂25-Oct-83 1417 @SRI-AI.ARPA:TW@SU-AI next week's talkware - Nov 1 TUESDAY - K. Nygaard
Received: from SRI-AI by SU-AI with TCP/SMTP; 25 Oct 83 14:17:18 PDT
Received: from SU-AI.ARPA by SRI-AI.ARPA with TCP; Tue 25 Oct 83 14:19:13-PDT
Date: 25 Oct 83 1407 PDT
From: Terry Winograd <TW@SU-AI>
Subject: next week's talkware - Nov 1 TUESDAY - K. Nygaard
To: "@377.DIS[1,TW]"@SU-AI
Date: Tuesday, Nov 1 *** NOTE ONE-TIME CHANGE OF DATE AND TIME ***
Speaker: Kristen Nygaard (University of Oslo and Norwegian Computing Center)
Topic: SYDPOL: System Development and Profession-Oriented Languages
Time: 1:15-2:30
Place: Poly Sci Bldg. Room 268. ***NOTE NONSTANDARD PLACE***
A new project involving several universities and research centers in three
Scandinavian countries has been established to create new methods of system
development, using profession-oriented languages. They will design
computer-based systems that will operate in work associated with
professions (the initial application is in hospitals), focussing on the
problem of facilitating cooperative work among professionals. One aspect
of the research is the development of formal languages for describing the
domains of interest and providing an interlingua for the systems and for
the people who use them. This talk will focus on the language-design
research, its goals and methods.
∂25-Oct-83 1518 @SRI-AI.ARPA:GOGUEN@SRI-CSL [GOGUEN at SRI-CSL: rewrite rule seminar]
Received: from SRI-AI by SU-AI with TCP/SMTP; 25 Oct 83 15:18:41 PDT
Received: from SRI-CSL by SRI-AI.ARPA with TCP; Tue 25 Oct 83 15:18:44-PDT
Date: 25 Oct 1983 1512-PDT
From: GOGUEN at SRI-CSL
Subject: [GOGUEN at SRI-CSL: rewrite rule seminar]
To: csli-friends at SRI-AI, briansmith at PARC-MAXC, kjb at SRI-AI
Date: 25 Oct 1983 1509-PDT
From: GOGUEN at SRI-CSL
Subject: rewrite rule seminar
To: Elspas, JGoldberg, Goguen, Green, DHare, Kautz, Lamport, Levitt,
Melliar-Smith, Meseguer, Moriconi, Neumann, Pease, Schwartz, Shostak,
DBerson, Oakley, Crow, Ashcroft, Denning, Geoff, Rushby, Jagan,
Jouannaud, Nelson
cc: jk at SU-AI, waldinger at SRI-AI, stickel at SRI-AI, pereira at SRI-AI
TENTATIVE PROGRAM FOR TERM REWRITING SEMINAR
--------------------------------------------
FIRST TALK:
20 October 1983, Thursday, 3:30-5pm, Jean-Pierre Jouannaud,
Room EL381, SRI
This first talk will be an overview: basic mechanisms, solved & unsolved
problems, and main applications of term rewriting systems.
We will survey the literature, also indicating the most important results
and open problems, for the following topics:
1. definition of rewriting
2. termination
3. For non-terminating rewritings: Church-Rosser properties, Sound computing
strategies, Optimal computing strategies
4. For terminating rewritings: Church-Rosser properties, completion
algorithm, inductive completion algorithm, narrowing process
Three kinds of term rewriting will be discussed: Term Rewriting
Systems (TRS), Equational Term Rewriting Systems (ETRS) and Conditional Term
Rewriting Systems (CTRS).
--------------------------------------------------
Succeeding talks should be more technical. The accompanying bibliographical
citations suggest important and readable references for each topic. Do we
have any volunteers for presenting these topics?
---------------------------------------------------
Second talk, details of terminating TRS:
Knuth and Bendix; Dershowitz TCS; Jouannaud; Lescanne & Reinig,
Formalization of Programming Concepts, Garmisch; Huet JACM; Huet JCSS; Huet
& Hullot JACM; Fay CADE 78; Hullot CADE 80; Goguen CADE 80.
Third and fourth talk, details of terminating ETRS:
Jouannaud & Munoz draft; Huet JACM; Lankford & Ballantine draft; Peterson &
Stickel JACM; Jouannaud & Kirchner POPL; Kirchner draft; Jouannaud, Kirchner
& Kirchner ICALP.
Fifth talk, details of turning the Knuth-Bendix completion procedure into a
complete refutational procedure for first order built in theories, with
applications to PROLOG:
Hsiang thesis; Hsiang & Dershowitz ICALP; Dershowitz draft "Computing
with TRW".
Sixth and seventh talks, non-terminating TRS and CTRS:
O'Donnel LNCS; Huet & Levy draft; Pletat, Engels and Ehrich draft; Bergstra
& Klop draft.
Eighth talk, terminating CTRS:
Remy thesis.
(More time may be needed for some talks.)
-------
-------
∂25-Oct-83 1551 ELYSE@SU-SCORE.ARPA Newsletter
Received: from SU-SCORE by SU-AI with TCP/SMTP; 25 Oct 83 15:51:04 PDT
Date: Tue 25 Oct 83 15:49:46-PDT
From: Elyse Krupnick <ELYSE@SU-SCORE.ARPA>
Subject: Newsletter
To: faculty@SU-SCORE.ARPA
Stanford-Phone: (415) 497-9746
You may know that we are putting together a newsletter for the department and
the alumni. One of the things we'd like to tell people about has to do with
honors you have received in the last year. Please, don't be modest. We are
anxious to know. Send any info to me at your earliest convenience. Thanks,
Elyse.
-------
∂25-Oct-83 1610 PETERS@SRI-AI.ARPA Meeting this Friday
Received: from SRI-AI by SU-AI with TCP/SMTP; 25 Oct 83 16:10:30 PDT
Date: Tue 25 Oct 83 16:11:42-PDT
From: Stanley Peters <PETERS@SRI-AI.ARPA>
Subject: Meeting this Friday
To: csli-b1@SRI-AI.ARPA
cc: csli-friends@SRI-AI.ARPA
This Friday there will be a meeting of projects B1 (Extending
Semantics Theories) and D4 (The Commonsense World) in the Ventura
Hall Seminar Room from 3:30 till 5:00. Jens-Erik Fenstad will
talk about work by Jan Tore Loenning on Mass Terms and
Quantification.
-------
∂25-Oct-83 1629 BRODER@SU-SCORE.ARPA Abstract of T. C. Hu's talk
Received: from SU-SCORE by SU-AI with TCP/SMTP; 25 Oct 83 16:29:22 PDT
Date: Tue 25 Oct 83 16:28:13-PDT
From: Andrei Broder <Broder@SU-SCORE.ARPA>
Subject: Abstract of T. C. Hu's talk
To: aflb.all@SU-SCORE.ARPA
Stanford-Office: MJH 325, Tel. (415) 497-1787
10/27/83 - Prof. T. C. Hu (UCSD)
"Graph folding and programmable logic arrays"
We consider a new problem "maximum folding in a graph" which is
similar to "maximum matching in a graph." Given a graph G with all
its arcs colored red and the complement G' with all its arcs colored
green, find a maximum set of arcs in G' such that the selected green
arcs form a matching in G' and there exists no cycle formed by
alternating red and selected green arcs. The application to the
design of programmable logic arrays is discussed.
******** Time and place: Oct. 27, 12:30 pm in MJ352 (Bldg. 460) *******
-------
∂25-Oct-83 1646 BRODER@SU-SCORE.ARPA Special AFLB talk!
Received: from SU-SCORE by SU-AI with TCP/SMTP; 25 Oct 83 16:46:43 PDT
Date: Tue 25 Oct 83 16:41:29-PDT
From: Andrei Broder <Broder@SU-SCORE.ARPA>
Subject: Special AFLB talk!
To: aflb.all@SU-SCORE.ARPA
cc: sharon@SU-SCORE.ARPA
Stanford-Office: MJH 325, Tel. (415) 497-1787
S P E C I A L A F L B T A L K
Note day (Tuesday, 11/1, 12:30) and place (MJH252)!! This is in
addition to the regular AFLB next week (J. M. Robson)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
11/1/83 - Prof. P. Hell (Simon Fraser University)
"Sorting in Rounds"
In studies of consumer preferences, the need arises for sorting
algorithms in which the (binary) comparisons are arranged in a fixed
number of rounds, the comparisons in each round being evaluated
simultaneously. (Hence, they may be viewed as parallel sorting
algorithms with an O(1) parallel time bound.) Using optimal algorithms
for merging in rounds, sorting in several rounds can be done with a
surprisingly small number of comparisons. For instance, n (linearly
ordered) keys can be sorted in, say, 50 rounds with O(n↑(11/10))
comparisons. In two rounds, no explicit method uses fewer than c*n↑2
comparisons. The existence of subquadratic algorithms was proved by
Haggkvist and Hell, who also observed that at least c*n↑(3/2) are
needed. In a more recent work, Thomason and Ballabas used
sophisticated methods of probabilistic graph theory to establish the
existence of O(n↑(3/2) log n) algorithms. An optimal algorithm for
max finding in rounds turns out to be an easy corollary of Turan's
theorem in graph theory.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Regular AFLB meetings are on Thursdays, at 12:30pm, in MJ352 (Bldg.
460).
If you have a topic you would like to talk about in the AFLB seminar
please tell me. (Electronic mail: broder@su-score.arpa, Office: Jacks
Hall 325, 497-1787) Contributions are wanted and welcome. Not all
time slots for the autumn quarter have been filled so far.
For more information about future AFLB meetings and topics you might
want to look at the file [SCORE]<broder>aflb.bboard .
- Andrei Broder
-------
∂25-Oct-83 1911 JF@SU-SCORE.ARPA testing
Received: from SU-SCORE by SU-AI with TCP/SMTP; 25 Oct 83 19:10:58 PDT
Date: Tue 25 Oct 83 19:07:23-PDT
From: Joan Feigenbaum <JF@SU-SCORE.ARPA>
Subject: testing
To: mpc-theory: ;
I have been entrusted with the care of stanford's copy of the BATS mailing
list. I am trying to update it. If you received more than one copy of this
message, please let me know.
thanks,
joan
(jf@su-score)
-------
∂25-Oct-83 1917 @SRI-AI.ARPA:GOGUEN@SRI-CSL correction to rewrite rule seminar date
Received: from SRI-AI by SU-AI with TCP/SMTP; 25 Oct 83 19:17:41 PDT
Received: from SRI-CSL by SRI-AI.ARPA with TCP; Tue 25 Oct 83 18:16:22-PDT
Date: 25 Oct 1983 1808-PDT
From: GOGUEN at SRI-CSL
Subject: correction to rewrite rule seminar date
To: briansmith at PARC-MAXC, kjb at SRI-AI, csli-friends at SRI-AI
THAT LAST MESSAGE SHOULD HAVE SAID OCTOBER 27!!
(so it's not too late for you to come on by after all!)
-- joseph
-------
∂26-Oct-83 0227 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #42
Received: from SU-SCORE by SU-AI with TCP/SMTP; 26 Oct 83 02:27:05 PDT
Date: Tuesday, October 25, 1983 8:47PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #42
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Wednesday, 26 Oct 1983 Volume 1 : Issue 42
Today's Topics:
Implementations - User Convenience Vs. Elegance
----------------------------------------------------------------------
Date: Thu, 20 Oct 83 11:01 EDT
From: Chris Moss <Moss.UPenn@Rand-Relay>
Subject: What is Prolog/Style vs Convenience
I think we should add "assert" and "retract" to the list of
"GOTO-like" primitives in Prolog. It is clear that they can lead to
very bad programming style. I think Richard's piece in May Sigplan is
excellent (read it if you haven't already).
But it raises the question of what exactly Prolog is. If we say that
it is a Horn-Clause programming system whose semantics is defined
exactly (e.g., by van Emden and Kowalski), then we even exclude
cuts/slashes from Prolog which are accepted by nearly everyone as
being an essential part of Prolog (even though parallel systems such
as Concurrent Prolog will use different semantics for a similar
looking idea).
On the other hand, Richard's assumption that the Edinburgh
implementation is normative for such things as setof is going too far
in the other direction. Clearly clarity is served if we reserve one
name for one type of object, but to say that you must use the word
"find←all" for a flat "setof" is curious. Which implementation
introduced the the word "find←all" ? I don't know.
This all points out that we need a continuing clean semantics for
Prolog as it is extended - that is the only way of avoiding these
dissensions. Anything that can be done from within Prolog should be
given a normative Prolog interpretation in the reference manual of an
implementation. Things that cannot be expressed in Prolog should be
given a proper denotational or similar semantics so that different
implementations behave the same way.
Most evaluable predicates should be actually evaluable functions.
"succ" is a function: if we try to make it non-detirministic, we
freeze in some totally arbitrary evaluation order which is extraneous
to it. Given the system "succ" one can easily write a
non-deterministic one in Prolog to suit one's need of the moment.
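For concreteness, here is a rough sketch of how such a non-deterministic
version might be layered on top of the arithmetic one (the names ndsucc
and nat are invented for this illustration, not taken from any existing
system):

    % nat(N) enumerates 0, 1, 2, ... on backtracking.
    nat(0).
    nat(N) :- nat(M), N is M + 1.

    % ndsucc(X, Y): Y is X + 1.  Deterministic whenever either
    % argument is already bound; otherwise it enumerates pairs.
    ndsucc(X, Y) :- nonvar(X), !, Y is X + 1.
    ndsucc(X, Y) :- nonvar(Y), !, X is Y - 1.
    ndsucc(X, Y) :- nat(X), Y is X + 1.

So ndsucc(3, Y) simply gives Y = 4, while ndsucc(X, Y) with both
arguments unbound backtracks through 0-1, 1-2, 2-3, and so on.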
I believe that "assert" and "retract" should have two extra
parameters: the database being added to, and the database created by
the additions (or deletions). In this way all the messy implementation
details are eliminated. For convenience we might have an additional
pair which act on the "default" database or workspace, whose names can
be handled by the system. Which database one is working on at any time
can then be described to the user.
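A minimal sketch of what such a pair could look like, taking the
database to be nothing more than a list of clauses (dbassert and
dbretract are names made up for this illustration, not predicates of
any existing implementation):

    % dbassert(Clause, Db0, Db): Db is the database Db0 with Clause added.
    dbassert(Clause, Db0, [Clause|Db0]).

    % dbretract(Clause, Db0, Db): Db is Db0 with one clause unifying
    % with Clause removed; fails if there is no such clause.
    dbretract(Clause, [Clause|Db], Db).
    dbretract(Clause, [Other|Db0], [Other|Db]) :-
        dbretract(Clause, Db0, Db).

The convenience pair acting on the "default" workspace would then just
thread one distinguished database through calls of this kind on the
user's behalf.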
------------------------------
Date: Saturday, 22-Oct-83 01:14:37-GMT
From: Richard HPS (on ERCC DEC-10) <OKeefe.R.A @EDXA>
Subject: Purity, Rplaca, and Retract.
I really do wonder to what extent "rplaca" is a USER-oriented
feature. If you want a structure with replaceable parts, you can
put atoms in at key points and set their top-level value. I've
known rplaca cause gross problems to novices. One, for example,
wrote a function something like
(defun pick (L fn)
(prog (x)
(setq x '(nil))
... when an element is found
... (tconc (car L) x) ...
(return (cdr x))
)
)
When I pointed out that all instances of x were going to get the
SAME list structure, and that he was going to get longer and longer
lists, he just couldn't believe it. He ran the function, and still
couldn't believe it. After I'd drawn a few boxes and waved my hands
for a bit, he managed to believe it, but it still didn't make sense
to him. Now of course there are functions that package up this whole
operation; in InterLisp it's (subset L fn), and THAT is user-oriented.
Similarly, setof, bagof, update are user-oriented in a way that
assert and retract are not. A "feature" can only be called
"user-oriented" if it helps people write correct programs, and to
do that people must understand it. I have very strong evidence that
assert and retract are hard to understand. Fernando Pereira cited
the fact that it can be very convenient to use asserta/assertz &
retract to implement a stack/queue, and gave a particular example.
But the Prolog-X system, designed by Bill Clocksin and Lawrence Byrd
(two people who should understand Prolog if anyone does) includes an
optimisation -- one which can produce ca 5-10% savings -- which
stops that example working. (In some but not all cases, and
debugging would disable the optimisation...)
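For readers who have not met the idiom, the kind of queue being
referred to is roughly the following (a generic sketch, not the
particular example Pereira gave):

    % Items live in the database as clauses of queueitem/1.  Since
    % assertz adds at the end of the clause list and retract removes
    % the first matching clause, the pair behaves as a FIFO queue.
    enqueue(X) :- assertz(queueitem(X)).
    dequeue(X) :- retract(queueitem(X)).

Its correctness rests entirely on the order in which clauses are
stored and retrieved, which is presumably just the kind of property
such an optimisation disturbs.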
I know how to implement assert and retract, and if I think hard
about it for a while I can usually spot the consequences of an
implementation. However, I have NO non-implementational understanding
of assert and retract which lets me decide whether another approach
is "correct". [This is not the case with rplaca/rplacd. You can
prove that the "invisible pointers" approach is correct.] What I
am looking for, and want to encourage other people to look for, is
operations which are currently coded using assert and
retract, but whose meaning is clear, and which may be
implementable directly.
setof, bagof, update, and assuming are operations of this sort.
It is quite straightforward to explain what they mean, and they
can be implemented more efficiently without using the database.
[sorry, without using assert/retract] Assert and retract introduce
very strong coupling between remote parts of the program, it is easy
to understand how that happens, but it is not easy to understand the
consequences.
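By contrast, the meaning of setof can be stated, and checked, without
any mention of the database. A standard example, assuming only the
usual member/2 list predicate:

    ?- setof(X, member(X, [b, a, c, a]), S).
    % succeeds with S = [a, b, c]: the solutions, sorted, without duplicates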
Maybe the answer is to reintroduce the distinction between
data and code, just as for example OPS5 distinguishes between tokens
in working memory and rules.
I have been using Prolog for 4 years. It was three years before
I felt confident about hacking the data base, and I still hate doing
it because I know how slow it is and how unlikely I am to get it
right first time. My search for a replacement is not a search for
"purity". It is a search for a language I can USE.
[The mixed language approach, such as Poplog, doesn't solve
this problem. You can indeed move all the data base hacking into
pop11, but then you have to master TWO languages AND their
interaction. The problem gets bigger.]
------------------------------
End of PROLOG Digest
********************
∂26-Oct-83 1025 ELYSE@SU-SCORE.ARPA Visitor from Marks and Sparks
Received: from SU-SCORE by SU-AI with TCP/SMTP; 26 Oct 83 10:25:24 PDT
Date: Wed 26 Oct 83 10:22:52-PDT
From: Elyse Krupnick <ELYSE@SU-SCORE.ARPA>
Subject: Visitor from Marks and Sparks
To: faculty@SU-SCORE.ARPA
Stanford-Phone: (415) 497-9746
I had a call from the Office of International Visitors and they would like a
faculty member to volunteer to talk with Dr. Simon Gann who is interested in
programming productivity and systems maintenance. If you would like to meet
with him please call Maria Bun at x1984. He will be here Nov. 7 & 8.
-------
∂26-Oct-83 1338 @SRI-AI.ARPA:TW@SU-AI WHOOPS! Talkware seminar is in 380Y today, not Durand
Received: from SRI-AI by SU-AI with TCP/SMTP; 26 Oct 83 13:37:48 PDT
Received: from SU-AI.ARPA by SRI-AI.ARPA with TCP; Wed 26 Oct 83 13:35:52-PDT
Date: 26 Oct 83 1131 PDT
From: Terry Winograd <TW@SU-AI>
Subject: WHOOPS! Talkware seminar is in 380Y today, not Durand
To: "@377.DIS[1,TW]"@SU-AI
Due to miscommunication, I was told that the video would be shown in
Durand 401. That is where we pick up the equipment. It will be
shown in the regular classroom - 380Y at 2:15
∂26-Oct-83 1429 GOLUB@SU-SCORE.ARPA next meeting
Received: from SU-SCORE by SU-AI with TCP/SMTP; 26 Oct 83 14:29:17 PDT
Date: Wed 26 Oct 83 14:28:28-PDT
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: next meeting
To: CSD-Senior-Faculty: ;
The next meeting will take place on Tuesday, Nov 1 at 2:30
in MJH 252. There are many important issues to discuss.
GENE
-------
∂26-Oct-83 1614 LAWS@SRI-AI.ARPA AIList Digest V1 #82
Received: from SRI-AI by SU-AI with TCP/SMTP; 26 Oct 83 16:11:25 PDT
Date: Wednesday, October 26, 1983 10:31AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #82
To: AIList@SRI-AI
AIList Digest Wednesday, 26 Oct 1983 Volume 1 : Issue 82
Today's Topics:
AI Hardware - Dolphin-Users Distribution List,
AI Software - Inference Engine Toolkit for PCs,
Metaphysics - Parallelism and Consciousness,
Machine Learning - Readings,
Seminars - CSLI & Speech Understanding & Term Rewriting & SYDPOL Languages
----------------------------------------------------------------------
Date: Tue 25 Oct 83 11:56:44-PDT
From: Christopher Schmidt <SCHMIDT@SUMEX-AIM.ARPA>
Subject: Dolphin-Users distribution list
If there are AIList readers who would like to discuss lisp machines
at a more detailed level than the credo of AIList calls for, let me alert them
to the existence of the Dolphin-Users@SUMEX distribution list. This list was
formed over a year ago to discuss problems with Xerox D machines, but it has
had very little traffic, and I'm sure few people would mind if other lisp
machines were discussed. If you would like your name added, please send a note
to Dolphin-Requests@SUMEX. If you would like to contribute or ask a question
about some lisp machine or problem, please do! --Christopher
------------------------------
Date: Wed 26 Oct 83 10:26:47-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Inference Engine Toolkit for PCs
I have been requested to pass on some product availability data to AIList.
I think I can do so without violating Arpanet regulations. I am
uncomfortable about such notices, however, and will generally require
that they pass through at least one "commercially disinterested" person
before being published in AIList. I will perform this screening only
in exceptional cases.
The product is a document on a backward-chaining inference engine
toolkit, including source code in FORTH. The inference engine uses
a production language syntax which allows semantic inference and
access to analytical subroutines written in FORTH. Source code is
included for a forward-chaining tool, but the strategy is not
implemented in the inference routines. The code is available on
disks formatted for a variety of personal computers. For further
details, contact Jack Park, Helion, Inc., Box 445, Brownsville, CA
95919, (916) 675-2478. The toolkit is also available from Mountain
View Press, Box 4656, Mountain View, CA 94040.
-- Ken Laws
------------------------------
Date: Tuesday, 25 October 1983, 10:28-EST
From: John Batali <Batali at MIT-OZ>
Subject: Parallelism and Consciousness
I'm interested in the reasons for the pairing of these two ideas. Does
anyone think that parallelism and consciousness necessarily have anything
to do with one another?
------------------------------
Date: Tue 25 Oct 83 12:22:45-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: Parallelism and Consciousness
I cannot say that "parallelism and consciousness are necessarily
related", for one can (at least) simulate a parallel process on a
sequential machine. However, just because one has the ability to
represent a process in a certain form does not guarantee that this
is the most natural form to represent it in; e.g., FORTRAN and LISP
are theoretically equally powerful, but who wants to program an expert
system in FORTRAN?
Top-down programming of knowledge is not (in my opinion) an
easy candidate for parallelism; one can hope for large
speed-ups in execution, but rarely are the algorithms
able to naturally utilize the ability of parallel systems to
support interacting non-deterministic processes. (I'm sure
I'll hear from some parallel logic programmer on that one).
My candidate for developing parallelism and consciousness involves
incorporating the non-determinism at the heart of the system, by
using a large number of subcognitive processes operating in
parallel; this is essentially Hofstadter's concept of consciousness
being an epiphenomenon of the interacting structures, and not being
explicitly programmed.
The reason for the parallelism is twofold. First, I would
assume that a system of interacting subcognitive structures would
have a significant amount of "random" effort, while the more
condensed logic-based system would be computationally more
efficient. Thus, the parallelism is partially used to offset the
added cost of the more fluid, random motion of the interacting
processes.
Second, the interacting processes would allow a natural interplay
between events based on time; for example, infinite loops are
easily avoided through having a process interrupt if too much
time is taken. The blackboard architecture is also naturally
represented in parallel, as a number of coordinating processes
scribble on a shared data structure. Actually, in my mind, the
blackboard structure has not been developed fully; I have the
image of people at a party in my mind, with groups forming,
ideas developed, groups breaking up and reforming. Many blackboards
are active at once, and as interest is forgotten, they dissolve,
then reform around other topics.
Notice that this representation of a party has no simple
sequential representation, nor would a simple top level rule
base be able to model the range of activities the party can evolve to.
How does "the party" decide what beer to buy, or how long to stay intact,
or whether it will be fun or not? If I were to model a party, I'd
say a parallel system of subcognitive structures would be almost
the only natural way.
As a final note, I find the vision of consciousness being
analogous to people at a party simple and humorous. And somehow,
I've always found God to clothe most truths in humor... am I the only
one who has laughed at the beautiful simplicity of E=MC↑2?
David
------------------------------
Date: 22 Oct 83 19:27:33 EDT (Sat)
From: Paul Torek <flink%umcp-cs@CSNet-Relay>
Subject: re: awareness
[Submitted by Robert.Frederking@CMU-CS-SAD.]
[Robert:]
I think you've misunderstood my position. I don't deny the existence of
awareness (which I called, following Michael Condict, consciousness). It's
just that I don't see why you, or anyone else, would not accept that the physical
object known as your brain is all that is necessary for your awareness.
I also think you have illegitimately assumed that all physicalists must be
functionalists. A functionalist is someone who believes that the mind
consists in the information-processing features of the brain, and that it
doesn't matter what "hardware" is used, as long as the "software" is the
same there is the same awareness. On the other hand, one can be a
physicalist and still think that the hardware matters too -- that awareness
depends on the actual chemical properties of the brain, and not just the
type of "program" the brain instantiates.
You say that a robot is not aware because its information-storage system
amounts to *just* the states of certain bits of silicon. Functionalists
will object to your statement, I think, especially the word "just" (meaning
"merely"). I think the only reason one throws the word "just" into the
statement is because one already believes that the robot is unaware. That
begs the question completely.
Suppose you have a "soul", which is a wispy ghostlike thing inside your body
but undetectable. And this "soul" is made of "soul-stuff", let's call it.
Suppose we've decided that this "soul" is what explains your
intelligent-appearing and seemingly aware behavior. But then someone comes
along and says, "Nonsense, Robert is no more aware than a rock is, since we,
by using a different level of abstraction in thinking about it, can see that
his data-structure is *merely* the states of certain soul-stuff inside him."
What makes that statement any less cogent than yours concerning the robot?
So, I don't think dualism can provide any advantages in explaining why
experiences have a certain "feel" to them. And I don't see any problems
with the idea that the "feel" of an experience is caused by, or is identical
with, or is one aspect of, (I haven't decided which yet), certain brain
processes.
--Paul Torek, umcp-cs!flink
------------------------------
Date: Monday, 24 October 1983 15:31:13 EDT
From: Robert.Frederking@CMU-CS-CAD
Subject: Re: awareness
Sorry about not noticing the functionalist/physicalist
distinction. Most of the people that I've discussed this with were either
functionalists or dualists.
The physicalist position doesn't bother me nearly as much as the
functionalist one. The question seems to be whether awareness is a function
of physical properties, or something that just happens to be associated with
human brains -- that is, whether it's a necessary property of the physical
structure of functioning brains. For example, the idea that your "soul" is
"inside your body" is a little strange to me -- I tend to think of it as
being similar to the idea of hyperdimensional mathematics, so that a person's
"soul" might exist outside the dimensions we can sense, but communicate with
their body. I think that physicalism is a reasonable hypothesis, but the
differences are not experimentally verifiable, and dualism seems more
reasonable to me.
As far as the functionalist counter-argument to mine would go, the
way you phrased it implies that I think that the "soul" explains human
behavior. Actually, I think that *all* human behavior can be modeled by
physical systems like robots. I suspect that we'll find physical correlates
to all the information processing behavior we see. The thing I am
describing is the internal experience. A functionalist certainly could make
the counter-argument, but the thing that I believe to be important in this
discussion is exactly the question of whether the "soul" is intrinsically
part of the body, or whether it's made of "soul-stuff", not necessarily
"located" in the body (if "souls" have locations), but communicating with
it. As I implied in my previous post, I am concerned with the eventual
legal and ethical implications of taking a functionalist point of view.
So I guess I'm saying that I prefer either physicalism or dualism to
functionalism, due to the side-effects that will occur eventually, and that
to me dualism appears the most intuitively correct, although I don't think
anyone can prove any of the positions.
------------------------------
Date: 24 Oct 1983 13:58:10-EDT
From: Paul.Rosenbloom at CMU-CS-H
Subject: ML Readings
[Reprinted from the CMU-AI bboard.]
The suggested readings for this Thursday's meeting of the machine learning
course -- on chunking and macro-operators -- are: "Learning and executing
generalized robot plans" by Fikes, Hart, and Nilsson (AIJ 1972); "Knowledge
compilation: The general learning mechanism" by Anderson (proceedings of the
1983 machine learning workshop); and "The chunking of goal hierarchies: A
generalized model of practice" by Rosenbloom and Newell (also in the
proceedings of the 1983 machine learning workshop). These readings are now
(or will be shortly) on reserve in the E&S library.
------------------------------
Date: Mon 24 Oct 83 20:09:30-PDT
From: Doug Lenat <LENAT@SU-SCORE.ARPA>
Subject: CS Colloq 10/25 Terry Winograd & Brian Smith
[Reprinted from the SU-Score bboard. Sorry this one is late,
but it still may be valuable as the first mention of CSLI on
AIList. -- KIL]
CS Colloquium, Tuesday, October 25, 4:15 Terman Auditorium
Terry Winograd (CSD) and Brian Smith (Xerox PARC)
Introducing the Center for the Study of Language and Information
This summer a new institute was created at Stanford, made up of
researchers from Stanford, SRI, Xerox, and Fairchild working in the study
of languages, both natural and formal. Participants from Stanford will
include faculty, students and research staff from the departments of
Computer Science, Linguistics, and Philosophy. We will briefly describe
the structure of the institute, and will present at some length the
intellectual vision on which it is based and the content of the current
research projects.
------------------------------
Date: 23 Oct 1983 22:14:30-EDT
From: Gary.Bradshaw at CMU-RI-ISL1
Subject: Dissertation defense
[Reprinted from the CMU-AI bboard.]
I am giving my dissertation defense on Monday, October 31 at 8:30 a.m.
in Baker Hall 336b. Committee members: Herbert Simon (chair),
Raj Reddy, John Anderson, and Brian MacWhinney. The following is the
talk abstract:
LEARNING TO UNDERSTAND SPEECH SOUNDS:
A THEORY AND MODEL
Gary L. Bradshaw
Current theories of speech perception postulate a set of innate
feature detectors that derive a phonemic analysis of speech, even though a
large number of empirical tests are inconsistent with the feature detector
hypothesis. I will briefly describe feature detector theory and the
evidence against it, and will then present an alternative learning theory of
speech perception. The talk will conclude with a description of a
computer implementation of the theory, along with learning and performance
data for the system.
------------------------------
Date: 25 Oct 1983 1510-PDT
From: GOGUEN at SRI-CSL
Subject: rewrite rule seminar
TENTATIVE PROGRAM FOR TERM REWRITING SEMINAR
--------------------------------------------
FIRST TALK:
27 October 1983, Thursday, 3:30-5pm, Jean-Pierre Jouannaud,
Room EL381, SRI
This first talk will be an overview: basic mechanisms, solved & unsolved
problems, and main applications of term rewriting systems.
We will survey the literature, also indicating the most important results
and open problems, for the following topics:
1. definition of rewriting
2. termination
3. For non-terminating rewritings: Church-Rosser properties, Sound computing
strategies, Optimal computing strategies
4. For terminating rewritings: Church-Rosser properties, completion
algorithm, inductive completion algorithm, narrowing process
Three kinds of term rewriting will be discussed: Term Rewriting
Systems (TRS), Equational Term Rewriting Systems (ETRS) and Conditional Term
Rewriting Systems (CTRS).
--------------------------------------------------
Succeeding talks should be more technical. The accompanying bibliographical
citations suggest important and readable references for each topic. Do we
have any volunteers for presenting these topics?
---------------------------------------------------
Second talk, details of terminating TRS:
Knuth and Bendix; Dershowitz TCS; Jouannaud; Lescanne & Reinig,
Formalization of Programming Concepts, Garmisch; Huet JACM; Huet JCSS; Huet
& Hullot JACM; Fay CADE 78; Hullot CADE 80; Goguen CADE 80.
Third and fourth talk, details of terminating ETRS:
Jouannaud & Munoz draft; Huet JACM; Lankford & Ballantine draft; Peterson &
Stickel JACM; Jouannaud & Kirchner POPL; Kirchner draft; Jouannaud, Kirchner
& Kirchner ICALP.
Fifth talk, details of turning the Knuth-Bendix completion procedure into a
complete refutational procedure for first order built in theories, with
applications to PROLOG:
Hsiang thesis; Hsiang & Dershowitz ICALP; Dershowitz draft "Computing
with TRW".
Sixth and seventh talks, non-terminating TRS and CTRS:
O'Donnel LNCS; Huet & Levy draft; Pletat, Engels and Ehrich draft; Bergstra
& Klop draft.
Eighth talk, terminating CTRS:
Remy thesis.
(More time may be needed for some talks.)
------------------------------
Date: 25 Oct 83 1407 PDT
From: Terry Winograd <TW@SU-AI>
Subject: next week's talkware - Nov 1 TUESDAY - K. Nygaard
[Reprinted from the SU-SCORE bboard.]
Date: Tuesday, Nov 1 *** NOTE ONE-TIME CHANGE OF DATE AND TIME ***
Speaker: Kristen Nygaard (University of Oslo and Norwegian Computing Center)
Topic: SYDPOL: System Development and Profession-Oriented Languages
Time: 1:15-2:30
Place: Poly Sci Bldg. Room 268. ***NOTE NONSTANDARD PLACE***
A new project involving several universities and research centers in three
Scandinavian countries has been established to create new methods of system
development, using profession-oriented languages. They will design
computer-based systems that will operate in work associated with
professions (the initial application is in hospitals), focussing on the
problem of facilitating cooperative work among professionals. One aspect
of the research is the development of formal languages for describing the
domains of interest and providing an interlingua for the systems and for
the people who use them. This talk will focus on the language-design
research, its goals and methods.
------------------------------
End of AIList Digest
********************
∂26-Oct-83 1637 @SRI-AI.ARPA:desRivieres.PA@PARC-MAXC.ARPA 2 PM Computer Languages Seminar CANCELLED tomorrow
Received: from SRI-AI by SU-AI with TCP/SMTP; 26 Oct 83 16:37:30 PDT
Received: from PARC-MAXC.ARPA by SRI-AI.ARPA with TCP; Wed 26 Oct 83 16:34:12-PDT
Date: Wed, 26 Oct 83 16:31 PDT
From: desRivieres.PA@PARC-MAXC.ARPA
Subject: 2 PM Computer Languages Seminar CANCELLED tomorrow
To: csli-friends@sri-ai.ARPA
Reply-to: desRivieres.PA@PARC-MAXC.ARPA
This Thursday's 2 pm. seminar on computer languages has been cancelled.
We hope to reschedule Peter Deutsch's talk at a later date.
∂26-Oct-83 1638 @MIT-MC:MAREK%MIT-OZ@MIT-MC Re: Parallelism and Consciousness
Received: from MIT-MC by SU-AI with TCP/SMTP; 26 Oct 83 16:38:38 PDT
Date: Wed 26 Oct 83 19:29:29-EDT
From: MAREK%MIT-OZ@MIT-MC.ARPA
Subject: Re: Parallelism and Consciousness
To: BUCKLEY%MIT-OZ@MIT-MC.ARPA
cc: self-org%MIT-OZ@MIT-MC.ARPA, phil-sci@MIT-MC, marek%MIT-OZ@MIT-MC.ARPA
In-Reply-To: Message from "BUCKLEY@MIT-OZ" of Wed 26 Oct 83 18:37:47-EDT
-- of what relevance is the concept of algorithm to artificial
intelligence, or if you will, to computation? is it necessary?
In the computation theoretic sense, an algorithm is a failsafe way
of computing something. Thus, it is a necessary condition of computation.
Am I parsing this correctly?: "[A. is a] failsafe way of computing something"
implies "[A. is a] necessary condition of computation"? Non sequitur...
'Am amused at how easy it was to dismiss one of the most intriguing
questions of the ones posed. 'D be grateful for a bit more "computation"
on the next pass, as with the other ones...
-- Marek Lugowski
-------
∂26-Oct-83 1905 DKANERVA@SRI-AI.ARPA Newsletter No. 6, October 27, 1983
Received: from SRI-AI by SU-AI with TCP/SMTP; 26 Oct 83 19:04:26 PDT
Date: Wed 26 Oct 83 19:00:04-PDT
From: DKANERVA@SRI-AI.ARPA
Subject: Newsletter No. 6, October 27, 1983
To: csli-friends@SRI-AI.ARPA
...............
!
CSLI Newsletter
October 27, 1983 * * * Number 6
We've had a very gratifying response to the CSLI Newsletter,
especially in that it represents considerable interest in the
activities of the Center. To serve our readers well, we need to
announce events in time for people to fit them into their plans. So
I'd like to remind organizers of CSLI activities and other related
events to get their announcements in to me at least one week ahead, if
at all possible--by Wednesday noon, at the very latest.
People who would like to be added to the Newsletter distribution
list can send me a message at <DKanerva@SRI-AI> or Ventura Hall,
Stanford, CA 94305.
- Dianne Kanerva
* * * * * * *
C1 WORKING GROUP
Semantics of Computer Languages
On October 25, Jon Barwise gave what was intended to be a
logician's overview of formal approaches to the semantics of
programming languages: axiomatic, operational, and denotational.
Other logicians present did not share his view of the inadequacy of
the operational account. Interesting issues were raised, if not
settled.
Tuesday, November 1: Brian Smith will speak on
"Semantic Issues Surrounding LISP"
Tuesday, November 8: Carolyn Talcott will talk about
the results of her thesis on
the semantics of LISP-like languages.
* * * * * * *
MEETING FOR PROJECTS B1 AND D4
This Friday, October 28, there will be a meeting of projects B1
(Extending Semantics Theories) and D4 (The Commonsense World) in the
Ventura Hall Seminar Room from 3:30 until 5:00. Jens-Erik Fenstad
will talk about work by Jan Tore Loenning on Mass Terms and
Quantification.
* * * * * * *
CSLI IMAGEN PRINTER
The new CSLI Imagen printer has been delivered and will be
installed this week or early next week in Ventura Hall, room 7.
* * * * * * *
! Page 2
* * * * * * *
CSLI SCHEDULE FOR *THIS* THURSDAY, October 27, 1983
10:00 Research Seminar on Natural Language
Speaker: Ray Perrault (CSLI-SRI)
Title: "Speech Acts and Plans"
Place: Redwood Hall, room G-19
12:00 TINLunch
Discussion leader: Jeffrey S. Rosenschein
Paper for discussion: "Synchronization of Multi-Agent Plans"
by Jeffrey S. Rosenschein.
Place: Ventura Hall
2:00 Research Seminar on Computer Languages *CANCELLED TODAY*
This Thursday's seminar on computer languages has been cancelled.
We hope to reschedule at a later date Peter Deutsch's talk on
Smalltalk-80.
3:30 Tea
Place: Ventura Hall
4:15 Colloquium
Speaker: Jay M. Tenenbaum (Fairchild AI Lab)
Title: "A.I. Research at Fairchild"
Place: Redwood Hall, room G-19
Note to visitors:
Redwood Hall is close to Ventura Hall on the Stanford Campus. It
can be reached from Campus Drive or Panama Street. From Campus Drive
follow the sign for Jordan Quad. Parking is in the C-lot between
Ventura and Jordan Quad.
* * * * * * *
! Page 3
* * * * * * *
CSLI SCHEDULE FOR *NEXT* THURSDAY, NOVEMBER 3, 1983
10:00 Research Seminar on Natural Language
Speaker: Ivan Sag (HP-CSLI)
Topic: Issues in Generalized Phrase Structure Grammars.
Place: Redwood Hall, room G-19
12:00 TINLunch
Discussion leader: Ron Kaplan (CSLI-Xerox)
Paper for discussion: "How are grammars represented?"
by Edward Stabler,
BBS 6, pp. 391-421, 1983.
Place: Ventura Hall
2:00 Research Seminar on Computer Languages
Speaker: Carolyn Talcott (Stanford)
Title: "Symbolic computation--A view of LISP and
related systems"
Place: Redwood Hall, room G-19
3:30 Tea
Place: Ventura Hall
4:15 Colloquium
Speaker: Glynn Winskel (CMU)
Title: "Denotational Semantics: An Introduction"
Place: Redwood Hall, room G-19
Note to visitors:
Redwood Hall is close to Ventura Hall on the Stanford Campus. It
can be reached from Campus Drive or Panama Street. From Campus Drive
follow the sign for Jordan Quad. Parking is in the C-lot between
Ventura and Jordan Quad.
* * * * * * *
MAIL SLOT FOR STUDENT RESEARCH ASSISTANTS
Student research assistants may check their mail in the mail slot
labeled "SRA" in Ventura Hall. Once the mail slots in Casita have
been set up, mail will be distributed there for the people with
offices in Casita and for CSLI students.
* * * * * * *
! Page 4
* * * * * * *
TINLUNCH SCHEDULE
TINLunch will be held at 12:00 on Thursday, October 27, 1983, at
Ventura Hall, Stanford University. Michael Georgeff will lead the
discussion. The paper for discussion will be
SYNCHRONIZATION OF MULTI-AGENT PLANS
Jeffrey S. ROSENSCHEIN
NOTE: The author will be present on October 27 for TINLUNCH.
TINLunch will be held each Thursday at Ventura Hall on the
Stanford University campus as a part of CSLI activities. Copies of
TINLunch papers will be at SRI in EJ251 and at Stanford University in
Ventura Hall.
NEXT WEEK: "How Are Grammars Represented?"
by
Edward Stabler
(THE BEHAVIORAL AND BRAIN SCIENCES,
Vol. 6, pp. 391-421, 1983)
SCHEDULE:
October 27 Michael Georgeff
November 3 Ron Kaplan
November 10 Martin Kay
November 17 Jerry Hobbs
November 24 THANKSGIVING
* * * * * * *
SEMINAR ON APPROACHES TO COMPUTER LANGUAGES, NOVEMBER 3
Carolyn Talcott of Stanford will present the Approaches to
Computer Languages Seminar on November 3 at 2:00 p.m. in Redwood
Auditorium. Her topic will be "Symbolic computation--A view of LISP
and related systems."
Abstract:
LISP is widely used as the basic computing environment in the AI
community, and is becoming more widely available. LISP is in fact one
of the older high level languages. It has persisted while many others
have come and gone. In this talk we will consider the following
questions: What is LISP? What makes LISP great? How might it be
better? How can we better understand (provide simple semantics
for) current programming practice in LISP?
We will begin with a brief introduction to LISP--its data
structures, programs, and interactive environment. (LISP is more than
a language!) We will then discuss some of the important features of
LISP that distinguish it from other languages. Some variations,
alternatives, and improvements to LISP will be suggested. Finally, we
will present some ideas about how to understand, design, and develop
systems for symbolic computation.
* * * * * * *
! Page 5
* * * * * * *
LISP AS LANGUAGE COURSE, WINTER QUARTER
Below is an announcement of the Lisp as Language seminar that I
will be teaching next quarter. We will probably aim for two
one-and-a-quarter hour class sessions per week, plus a three-hour
session once a week for programming help. Time, place, etc., are all
to be determined.
If you think you will be interested in taking the course, please
let me know, so that I can plan for a classroom, machines, TA's, etc.
- Brian Smith
-- LISP as Language --
A systematic introduction to the concepts and practices of
programming, based on a simple reconstructed dialect of LISP. The aim
is to make explicit the knowledge that nights and weekends of
programming make implicit. The material will be presented under a
"linguistic reconstruction," using vocabulary that should be of use in
studying any linguistic system. Although intended primarily for
linguists, philosophers, and mathematicians, anyone interested in
computation is welcome.
Although no previous exposure to computation is required, we will
aim for rigorous analyses. Familiarity with at least some formal
system is therefore essential. Participants will be provided with
tutorial programming instruction.
Topics to be covered include:
- Procedural and data abstraction
- Objects, modularity, state, and encapsulation
- Input/output, notation, and communication protocols
- Metalinguistic abstraction, and problems of intensional grain
- Self-reference, metacircular interpreters, and reflection
Throughout the course, we will pay particular attention to the
following themes:
- Procedural and declarative notions of semantics
- Interpretation, compilation, and other models of processing
- Architecture, implementation, and abstract machines
- Implicit vs. explicit representation of information
- Contextual relativity, scoping mechanisms, and locality
The course will be based in part on the "Structure and
Interpretation of Computer Programs" textbook, by Abelson and Sussman,
that has been used at M.I.T., although the linguistic orientation will
affect our dialects and terminology.
* * * * * * *
! Page 6
* * * * * * *
WHY CONTEXT WON'T GO AWAY - Fifth Meeting
Tuesday, Nov. 1, Ventura Hall, 3:15
"How to Bridge the Gap Between Meaning and Reference"
GUEST SPEAKER: Professor Howard Wettstein
Professor Howard Wettstein, who will be visiting CSLI next week,
has done his major work on the semantics of singular reference with
particular emphasis on indexical expressions and definite
descriptions.
* * * * * * *
TALKWARE SEMINAR - CS 377
This week's talkware seminar (Oct. 26) was by Greg Nelson (Xerox
PARC), speaking on "JUNO: A constraint-based language for graphics."
Abstract: Juno is an interactive, programmable, constraint-based
system for producing graphic images, such as typographic artwork or
technical illustrations. The user of the system can choose between
"drawing" an image on a raster display, constructing the image by the
execution of a Juno program, or using these modes in combination.
Both modes are "constraint-oriented"; that is, instead of giving
explicit coordinates for each control point of the image, the user can
position a point by specifying the geometric relation between it and
other control points. Juno is under development at Xerox PARC. A
videotape of the program is shown.
NEXT WEEK'S TALKWARE SEMINAR:
*** NOTE ONE-TIME CHANGE OF DATE, TIME, AND PLACE ***
Date: Tuesday, Nov. 1
Speaker: Kristen Nygaard, University of Oslo and Norwegian Computing Center
Topic: SYDPOL: System Development and Profession-Oriented Languages
Time: 1:15-2:30
Place: Political Science Bldg., Room 268.
Abstract: A new project involving several universities and
research centers in three Scandinavian countries has been established
to create new methods of system development, using profession-oriented
languages. They will design computer-based systems that will operate
in work associated with professions (the initial application is in
hospitals), focusing on the problem of facilitating cooperative work
among professionals. One aspect of the research is the development of
formal languages for describing the domains of interest and providing
an interlingua for the systems and for the people who use them. This
talk will focus on the language-design research, its goals and
methods.
* * * * * * *
! Page 7
* * * * * * *
TENTATIVE PROGRAM FOR TERM REWRITING SEMINAR
FIRST TALK: Thursday, October 27, 1983, 3:30-5 p.m., Room EL381, SRI
Jean-Pierre Jouannaud
This first talk will be an overview: basic mechanisms, solved and
unsolved problems, and main applications of term rewriting systems.
We will survey the literature, also indicating the most important
results and open problems, for the following topics:
1. Definition of rewriting
2. Termination
3. For nonterminating rewritings: Church-Rosser properties,
sound computing strategies, optimal computing strategies
4. For terminating rewritings: Church-Rosser properties, completion
algorithm, inductive completion algorithm, narrowing process
Three kinds of term rewriting will be discussed: Term Rewriting
Systems (TRS), Equational Term Rewriting Systems (ETRS), and
Conditional Term Rewriting Systems (CTRS).
The succeeding talks should be more technical. The tentative
schedule is given below.
SECOND TALK: Details of terminating TRS
THIRD and FOURTH TALKS: Details of terminating ETRS
FIFTH TALK: Details of turning the Knuth-Bendix completion procedure
into a complete refutational procedure for first-order, built-in
theories, with applications to PROLOG
SIXTH and SEVENTH TALKS: Nonterminating TRS and CTRS
EIGHTH TALK: Terminating CTRS
* * * * * * *
COMPUTER SCIENCE COLLOQUIUM ON CSLI
On Tuesday, October 25, Professor Terry Winograd of Stanford and
Dr. Brian Smith of Xerox PARC spoke at a colloquium in Terman
Auditorium introducing the Center for the Study of Language and
Information.
* * * * * * *
SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
On Wednesday, October 26, Professor Sol Feferman spoke at the
weekly Seminar in Logic and Foundations of Mathematics on "Reverse
Mathematics." The talk introduced and surveyed work by Friedman,
Simpson, and others, providing sharp information in the form of
equivalences as to which set-existence axioms are needed to prove
various statements in analysis and algebra.
This seminar is held Wednesdays, 4:15-5:30 p.m., in the Stanford
Mathematics Dept. Faculty Lounge (383-N).
* * * * * * *
! Page 8
VENTURA/CSLI PHONE LIST - 10/20/83
NAME ARPA NAME ROOM SU PHONE COM. OTHER PHONE
HOME PHONE
Almog, Joseph 32 7-3195 961-8047 (HM)
Barwise, Jon KJB@sri-ai 20 7-0110 20 857-0110 (HM)
Batema, Leslie LB@sri-ai 25 7-0939 25 58/345-3840 (HM)
Bresnan, Joan BRESNAN@sri-ai 26 7-0144 26 494-4314 (PARC)
BRESNAN@parc 851-1670 (HM)
Dutcher, Rich RICH@sri-ai 25 7-0939 25 55/543-5114 (HM)
Firstenberger,Joyce JOYCE@sri-ai 16 7-1563 3 854-7475 (HM)
Grosz, Barbara GROSZ@sri-ai 27 7-1202 27 859-4839 (SRI)
322-1522 (HM)
Macken, Betsy BMACKEN@sri-ai 24 7-1224 24 493-2599 (HM)
McConnel-Riggs,Sandy RIGGS@sri-ai 25 7-0939 25 327-9449 (HM)
Pease, Emma EMMA@sri-ai 25 7-0939 25 857-1472 (HM)
Perry, John JRP@sri-ai 28 7-1275 28 327-0649 (HM)
Peters, Stanley PETERS@sri-ai 29 7-2212 29 328-9779 (HM)
Printer Room 7 7-0628 8
Shaw, Marguerite 35 7-3111 5 948-3138 (HM)
Smith, Brian BRIANSMITH@parc 27 7-1710 27 494-4336 (PARC)
857-1686 (HM)
Suppes, Pat 36 7-3111 6 321-6594 (HM)
Reading Room 6 7-4924
Receptionist Lobby 7-0628 7
Tran, Bach-Hong BACH-HONG@sri-ai 14 7-1249 4 961-7085 (HM)
Wunderman, Pat WUNDERMAN@sri-ai 23 7-1131 23 968-6818 (HM)
CASITA/CSLI
Fenstad, Jens-Erik 49 7-3474
Ford, Marilyn 43 7-4408
Gardenfors, Peter 41 7-9196
Kanerva, Dianne DKANERVA@sri-ai 40 7-1712 327-8594 (HM)
Nissenbaum, Helen 41 7-9196
Ostrom, Eric 42 7-2607
Strand, Bjorn 50 7-2137
-------
∂27-Oct-83 0859 @SU-SCORE.ARPA:OR.STEIN@SU-SIERRA.ARPA Re: Newsletter
Received: from SU-SCORE by SU-AI with TCP/SMTP; 27 Oct 83 08:58:53 PDT
Received: from SU-SIERRA.ARPA by SU-SCORE.ARPA with TCP; Thu 27 Oct 83 08:58:07-PDT
Date: Thu 27 Oct 83 08:58:14-PDT
From: Gail Stein <OR.STEIN@SU-SIERRA.ARPA>
Subject: Re: Newsletter
To: ELYSE@SU-SCORE.ARPA, faculty@SU-SCORE.ARPA
cc: OR.STEIN@SU-SIERRA.ARPA
In-Reply-To: Message from "Elyse Krupnick <ELYSE@SU-SCORE.ARPA>" of Tue 25 Oct 83 15:55:10-PDT
Professor Dantzig has received the following honors during 1983:
Honorary Member, IEEE, January 1983
Doctorate Honoris Causa, The Universite Catholique de Louvain,
Faculte des Sciences Appliquees, Louvain-la-Neuve, Belgium, Feb 1983.
Honorary Doctor of Science, Columbia University, 1983
Honorary Doctorate Degree in Economics, Faculty of Law and Economics,
University of Zurich, Switzerland, 1983.
Let me know if you need any additional information. -- Gail
-------
∂27-Oct-83 1448 @SU-SCORE.ARPA:YM@SU-AI Town Meetings
Received: from SU-SCORE by SU-AI with TCP/SMTP; 27 Oct 83 14:48:33 PDT
Received: from SU-AI.ARPA by SU-SCORE.ARPA with TCP; Thu 27 Oct 83 14:46:56-PDT
Date: 27 Oct 83 1445 PDT
From: Yoni Malachi <YM@SU-AI>
Subject: Town Meetings
To: students@SU-SCORE, faculty@SU-SCORE
CC: bosack@SU-SCORE, ME@SU-AI, mrc@SU-SCORE,
reges@SU-SCORE, REG@SU-AI, LMG@SU-AI, BS@SU-AI, MEW@SU-AI
Reply-To: ym@sail,patashnik@score
A town meeting with Gene will take place on Nov 11 at noon in room 420-041.
Send us items for the agenda or come and bring them with you.
A CSD-CF town meeting will be held from 12 to 2pm on Wed, Nov. 16th, 1983.
We'll try to discuss the most important issues between 12:20 and 1:15 so those
who have classes ending at 12:15 or starting at 1:15 can be there.
The place is once again room 420-041.
Oren & Yoni, bureaucrats
∂27-Oct-83 1859 LAWS@SRI-AI.ARPA AIList Digest V1 #83
Received: from SRI-AI by SU-AI with TCP/SMTP; 27 Oct 83 18:58:30 PDT
Date: Thursday, October 27, 1983 2:53PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #83
To: AIList@SRI-AI
AIList Digest Friday, 28 Oct 1983 Volume 1 : Issue 83
Today's Topics:
AI Jargon - Definitions,
Unification - Request,
Rational Psychology - Definition,
Conferences - Computers and the Law & FORTH Proceedings,
Seminars - AI at ADL & Theorem Proving
----------------------------------------------------------------------
Date: 26 October 1983 1048-PDT (Wednesday)
From: abbott at AEROSPACE (Russ Abbott)
Subject: Definitions of AI Terms
The IEEE is in the process of preparing a dictionary of computer terms.
Included will be AI-related terms. Does anyone know of existing sets of
definitions?
In future messages I expect to circulate draft definitions for comment.
------------------------------
Date: 26 Oct 83 16:46:09 EDT (Wed)
From: decvax!duke!unc!bts@Berkeley
Subject: Unification
Ken,
I posted this to USENET a week ago. Since it hasn't shown
up in the AIList, I suspect that it didn't make it to SRI [...].
[Correct, we must have a faulty connection. -- KIL]
Bruce
P.S. As an astute USENET reader pointed out, I perhaps should have said
that a unifier makes the terms "syntactically equal". I thought it
was clear from context.
=====================================================================
From: unc!bts (Bruce Smith)
Newsgroups: net.ai
Title: Unification Query
Article-I.D.: unc.6030
Posted: Wed Oct 19 01:23:46 1983
Received: Wed Oct 19 01:23:46 1983
I'm interested in anything new on unification
algorithms. In case some readers don't know what I'm talking
about, I'll give a short description of the problem and some
references I know of. Experts-- the ones I'm really
interested in reaching-- may skip to the last paragraph.
Given a set of terms (in some language) containing
variables, the unification problem is to find a 'unifier',
that is, a substitution for the variables in those terms
which would make the terms equal. Moreover, the unifier
should be a 'most general unifier', that is, any other
unifiers should be extensions of it. Resolution theorem-provers
and logic programming languages like Prolog depend on
unification-- though the Prolog implementations I'm familiar
with "cheat". (See Clocksin and Mellish's "Programming in
Prolog", p. 219.)
Unification seems to be a very active topic. The paper
"A short survey on the state of the art in matching and
unification problems", by Raulefs, Siekmann, Szabo and
Unvericht, in the May 1979 issue of the SIGSAM Bulletin,
contains a bibliography of over 90 articles. And, "An
efficient unification algorithm", by Martelli and Montanari, in
the April 1982 ACM Transactions on Programming Languages and
Systems, gives a (very readable) discussion of the
efficiency of various unification algorithms. A programming
language has even been based on unification: "Uniform-- A
language based on unification which unifies (much of) Lisp,
Prolog and Act1" by Kahn in IJCAI-81.
So, does anyone out there in network-land have a
unification bibliography more recent than 1979? If it's on-line,
would you please post it to USENET's net.ai? If not, where
can we get a copy?
Bruce Smith, UNC-Chapel Hill
decvax!duke!unc!bts (USENET)
bts.unc@udel-relay (other NETworks)
------------------------------
Date: Wednesday, 26-Oct-83 18:42:21-GMT
From: RICHARD HPS (on ERCC DEC-10) <okeefe.r.a.@edxa>
Reply-to: okeefe.r.a. <okeefe.r.a.%edxa@ucl-cs>
Subject: Rational Psychology
If you were thinking of saying something about "Rational Psychology"
and haven't read the article, PLEASE restrain yourself. It appeared in
Volume 4 Issue 3 (Autumn 83) of "The AI Magazine", and is pages 50-54 of
that issue. It isn't hard to get AI Magazine. AAAI members get it. I'm
not a member, but DAI Edinburgh has a subscription and I read it in the
library. I am almost tempted to join AAAI for the AI magazine alone, it
is good value.
The "Rational" in Rational Psychology modifies Psychology the same
way Rational modifies Mechanics in Rational Mechanics or Thermodynamics
in Rational Thermodynamics. It does NOT contrast with "the psychology
of emotion" but with Experimental Psychology or Human Psychology. Here
is a paragraph from the paper in question:
" The aim of rational psychology is understanding, just as in any
other branch of mathematics. Where much of what is labelled "mathematical
psychology" consists of microscopic mathematical problems arising in the
non-mathematical prosecution of human psychology, or in the exposition of
informal theories with invented symbols substituting for equally precise
words, rational psychology seeks to understand the structure of
psychological concepts and theories by means of the most fit mathematical
concepts and strict proofs, by suspiciously analyzing the informally
developed notions to reveal their essence and structure, to allow debate
on their interpretation to be phrased precisely, with consequences of
choices seen mathematically. The aim is not simply to further informal
psychology, but to understand it instead, not necessarily to solve
problems as stated, but to see if they are proper problems at all by
investigating their formulations. "
There is nothing in this, or any other part of the paper, that would
exclude the study of emotions from Rational Psychology. Indeed, unless or
until we encounter another intelligent race, Rational Psychology seems to
offer the only way of telling whether there are emotions that human beings
cannot experience.
My only criticism of Doyle's programme (note spelling, I am not
talking about a computer program) is that I think we are as close to a
useful Rational Psychology as Galileo was to Rational Mechanics or Carnot
was to Rational Thermodynamics. I hope other people disagree with me and
get cracking on it. Any progress at all in this area would be useful.
------------------------------
Date: Thu, 27 Oct 83 07:50:56 pdt
From: ihnp4!utcsrgv!dave@Berkeley
Subject: Computers and the Law
Dalhousie University is sponsoring a computer conference under
CONFER on an MTS system at Wayne State University in Michigan.
The people in the conference include lawyers interested in computers
as well as computer science types interested in law.
Topics of discussion include computer applications to law, legal issues
such as patents, copyrights and trade secrets in the context of computers,
CAI in legal education, and AI in law.
For those who aren't familiar with Confer, it provides a medium which
is somewhat more structured than Usenet for discussions. People post
"items", and "discussion responses" are grouped chronologically (and
kept forever) under the item. All of the files are on one machine only.
The conference is just starting up. Dalhousie has obtained a grant to
fund everyone's participation, which means anyone who is interested
can join for free. Access is through Telenet or Datapac, and the
collect charges are picked up by the grant.
If anyone is interested in joining this conference (called Law:Forum),
please drop me a line.
Dave Sherman
The Law Society of Upper Canada
Osgoode Hall
Toronto, Ont.
Canada M5H 2N6
(416) 947-3466
decvax!utzoo!utcsrgv!dave@BERKELEY (ARPA)
{ihnp4,cornell,floyd,utzoo} !utcsrgv!dave (UUCP)
------------------------------
Date: Thu 27 Oct 83 10:22:48-PDT
From: WYLAND@SRI-KL.ARPA
Subject: FORTH Convention Proceedings
I have been told that there will be no formal proceedings of the
FORTH convention, but that articles will appear in "FORTH
Dimensions", the magazine/journal of the FORTH Interest Group.
This journal publishes technical articles about FORTH methods and
techniques, algorithms, applications, and standards. It is
available for $15.00/year from the following address:
FORTH Interest Group
P.O. Box 1105
San Carlos, CA 94070
415-962-8653
As you may know, Mountain View Press carries most of the
available literature for FORTH, including the proceedings of the
various technical conferences such as the FORTH Application
Conferences at the University of Rochester and the FORML
conferences. I highly recommend them as a source of FORTH
literature. Their address is:
Mountain View Press, Inc.
P.O. Box 4656
Mountain View, CA 94040
415-961-4103
I hope this helps.
Dave Wyland
WYLAND@SRI
------------------------------
Date: Wednesday, 26 October 1983 14:55 edt
From: TJMartin.ADL@MIT-MULTICS.ARPA (Thomas J. Martin)
Subject: Seminar Announcement
PLACE: Arthur D. Little, Inc.
Acorn Park (off Rte. 2 near Rte. 2/Rte. 16 rotary)
Cambridge MA
DATE: October 31, 1983
TIME: 8:45 AM, ADL Auditorium
TOPIC: "Artificial Intelligence at ADL -- Activities, Progress, and Plans"
SPEAKER: Dr. Karl M. Wiig, Director of ADL AI Program
ABSTRACT: ADL's AI program has been underway for four months. A core group
of staff has been recruited from several sections in the company
and trained. Symbolics 3600 and Xerox 1100 machines have been
installed and are now operational.
The seminar will discuss research in progress at ADL in:
expert systems, natural language, and knowledge engineering tools.
------------------------------
Date: Wed 26 Oct 83 20:11:52-PDT
From: Doug Lenat <LENAT@SU-SCORE.ARPA>
Subject: CS Colloq, Tues 11/1 Jussi Ketonen
[Reprinted from the SU-SCORE bboard.]
CS Colloquium, Tuesday, November 1, 4:15pm Terman Auditorium
(refreshments at 3:45 at the 3rd floor lounge of MJH)
SPEAKER: Dr. Jussi Ketonen, Stanford University CS Department
TITLE: A VIEW OF THEOREM-PROVING
I'll be discussing the possibility of developing powerful
expert systems for mathematical reasoning - a domain characterized by
highly abbreviated symbolic manipulations whose logical complexity
tends to be rather low. Of particular interest will be the proper
role of meta theory, high-order logic, logical decision procedures,
and rewriting. I will argue for a different, though equally
important, role for the widely misunderstood notion of meta theory.
Most of the discussion takes place in the context of EKL, an
interactive theorem-proving system under development at Stanford. It
has been used to prove facts about Lisp programs and combinatorial
set theory.
I'll describe some of the features of the language of EKL,
the underlying rewriting system, and the algorithms used for
high-order unification with some examples.
------------------------------
End of AIList Digest
********************
∂28-Oct-83 0042 TYSON@SRI-AI.ARPA Using the Imagen Laser Printer
Received: from SRI-AI by SU-AI with TCP/SMTP; 28 Oct 83 00:42:34 PDT
Date: Fri 28 Oct 83 00:40:27-PDT
From: Mabry Tyson <Tyson@SRI-AI.ARPA>
Subject: Using the Imagen Laser Printer
To: csli-folks@SRI-AI.ARPA
CSLI's Imagen laser printer has been installed. We will probably be shaking
some bugs out of it in the next few days, so its reliability may not be
perfect in the short run. Also, it is currently connected to SRI-AI by
a 1200 baud dial-up line. Until we get a 9600 baud line, it may be a little
slow in producing output.
One thing to remember is that this is not a line printer. It was designed
for a usage of about 10,000 pages per month. It prints at a maximum
speed of 10 pages per minute. So a 50-page document (even if each page has
only one line printed) will take at least 5 minutes to print.
Another thing to remember is that although the supplies cost per page is
probably comparable to that of a copy machine, this laser printer also uses
computer resources. It uses a good bit of CPU time and also puts a
strain on the I/O of the computer when printing. I suggest that you use
a copier for multiple copies of a document produced by the Imagen.
How to specify that you want to use the CSLI Imagen
---------------------------------------------------
SRI-AI has two Imagens in use now. The default one is the one near the
AI Center; people wishing to use the CSLI Imagen will have to specify this
to the system.
The magic incantation to declare that you wish to use the CSLI Imagen is
TAKE <CSLI>CSLI-IMAGEN.CMD
This will do what is necessary to cause you to use the CSLI Imagen. If you
then wish to instead use the Imagen by the AI Center, do
TAKE <CSLI>SRI-IMAGEN.CMD.
These commands will affect all the Imagen-related programs: Imagen, IQ, and DQ.
The easiest way to have this command done every time you login is to add that
one line to your LOGIN.CMD. If you don't have one, just do
COPY <CSLI>CSLI-IMAGEN-LOGIN.CMD LOGIN.CMD
How to print a file on the Imagen
---------------------------------
The Imagen can be used to print specially formatted files containing many
fonts (such as those produced by TeX and Scribe) or normal
Ascii files. Before being printed (except as noted below), the files
have to be translated into a form, understandable by the Imagen, that
contains the font information. The program on SRI-AI to
do this is called IMAGEN. Basically you give the command
IMAGEN file.ext
to have the file printed. Use HELP IMAGEN for more information.
The IMAGEN program allows you to specify which pages are to be printed using
the /PAGES=m:n switch, thereby saving computer time and paper.
One switch I often use on normal Ascii files is the /MAG: switch to
use a different magnification. I find that /MAG:0.8 results in a more
readable file that requires fewer pages than the default /MAG:1.0. (Only
some magnifications such as 0.5, 0.6, ..., 1.0, 1.2,... are available.)
A normal Ascii text file can also be printed by copying it directly
into the directory that is used to queue files for printing. To do
this you may use
COPY foo.bar LSR:
The file will NOT be printed exactly as though you did IMAGEN foo.bar, but
it may be a little cheaper to print files this way. It is also handy
to tell MM to LIST messages to the Imagen by giving the MM commands
SET LIST-DEVICE LSR:yourname.mail
CREATE-INIT
The first of these specifies that you want listed messages to go to
the Imagen. The second causes this command to be remembered in a
MM.INIT file in your directory. Then whenever you want a hardcopy
of a message you can just say LIST <message sequence>. (However
remember that the Imagen output may be seen by anybody.)
How to check on a file being printed on the Imagen
--------------------------------------------------
Running IQ will give you a listing of the queue of files waiting to
be printed on the CSLI Imagen. There will also be a message about
the status of the Imagen. This queue is only updated about every
30 seconds.
You may wish to use the IMAGEN switch /NOTIFY so that you will be
notified (if you are logged-in) when your job is printed.
How to delete a queued file
---------------------------
When you run IQ, you will see that each queued file has a sequence
number. If you wish to delete a file that you queued, use
DQ 75
if the sequence number was 75.
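Putting the pieces together: a typical session, using only the commands
described above, might look something like this (the file name MEMO.TXT
and the job number are made up for illustration, and the exact switch
syntax may vary):
     TAKE <CSLI>CSLI-IMAGEN.CMD
     IMAGEN MEMO.TXT /NOTIFY
     IQ
     DQ 75
That is: select the CSLI Imagen, queue the file and ask to be notified
when it prints, look at the queue, and finally delete job 75 should you
change your mind.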
-------
∂28-Oct-83 0218 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #43
Received: from SU-SCORE by SU-AI with TCP/SMTP; 28 Oct 83 02:18:14 PDT
Date: Thursday, October 27, 1983 5:12PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #43
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Friday, 28 Oct 1983 Volume 1 : Issue 43
Today's Topics:
Implementations - User Convenience Vs. Elegance,
Query - Unification
----------------------------------------------------------------------
Date: Wed, 26 Oct 83 13:15:59 EDT
From: Yoav Shoham <Shoham@YALE>
Subject: User Convenience
The latest Digest issues have been concerned with the role
of (non-logical) side effects in Prolog, and in particular the
use of assert and retract. While it is true that the semantics
of pure LP are attractive in their neatness, few days go by
without my thanking the Lord and other individuals for inventing
``assert'' and ``retract''.
The latest example I came across is the implementation of
``generated lists'' (or ``glists'') in Prolog. Glists are infinite
lists the elements of which are retrieved by ``lazy evaluation''
(i.e., in a demand-driven way). Thus a glist contains an initial part
of the list, a function to compute the next value, and a function
to compute the next function (the latter may not be needed in more
straightforward uses of glists).
Here is a Prolog implementation of glists which uses side effects.
I have another version which is pure, but that has the disadvantages
of being extremely inefficient, and also wrong (in a way that's not
directly relevant).
% GLIST
% -----
next((L,Fn),(L1,Fn1)) :-
        Fn = [(L,X2,Fnextval),(L1,X4,Fnextfn)],
        Fnextval,
        asserta(tmpstoreglist((L1,X4,Fnextfn))),
        L1 = [X2|L],
        Fnextfn,
        retract(tmpstoreglist(Copy)),
        Fn1 = [X4,Copy].

% if we want the list in the database:
next(Name) :-
        retract(glist(Name,L1,F1)),
        next((L1,F1),(L2,F2)),
        assert(glist(Name,L2,F2)).

% an example of a glist : the positive integers
glist(integers,
      [],
      [(Y1,Y2,(Y1=[] -> Y2=1; Y1=[N|←],Y2 is N+1)),
       (Y3,Y7,(Y7=(Y5,Y6,(Y5=[] -> Y6=1; Y5=[N|←],Y6 is N+1))))]).
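(For readers who want to try the clauses above: the ← character is the
anonymous variable, written _ on most systems. A hypothetical session,
assuming an Edinburgh-style Prolog with these clauses loaded, might go
roughly as follows; the exact output format will vary.)

?- next(integers), glist(integers, L, F).
L = [1]

?- next(integers), glist(integers, L, F).
L = [2,1]

(Each call to next/1 retracts the stored glist, extends the list by one
element -- newest element first -- and asserts it back, with F bound to
the new pair of value- and function-triples.)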
Does anyone have a pure solution that is correct and efficient ?
-- Yoav Shoham.
------------------------------
Date: Wed 26 Oct 83 10:10:55-PDT
From: Wilkins <Wilkins@SRI-AI>
Subject: User Oriented Features
I object to calling something like rplaca not user-oriented because it
confuses novices. Nearly all programming in a given language is done
by people who are not novices, and features incomprehensible to
novices can be very important and useful. It does matter, though, how
people are taught the language when such features exist. In the
LISP example given (using TCONC) the fault almost certainly lies with
the instructor. No one I know who teaches LISP teaches the
use of rplaca, tconc, etc., except possibly after the students are
proficient. LISP without these is an adequate programming language
for a novice and still quite challenging. The proper time to teach
them about rplaca is when they come and complain about having to copy
all this list structure and want to know if there isn't some way to
smash the pointers. If they still have to be drawing boxes and arrows
to understand the basics, they shouldn't be using tconc. I imagine
there may also be useful features in Prolog that would be too hard for
novices to understand but would nevertheless be quite useful to
experienced programmers.
------------------------------
Date: 26 Oct 83 16:59:59 EDT (Wed)
From: Bruce T. Smith <BTS%UNC@CSNet-Relay>
Subject: Unification Query
This is a question for Prolog implementors, I suppose,
or anyone else brave enough to read the source of his
favorite Prolog interpreter:
What kind of unification algorithm does your Prolog
use ? Or, for that matter, does it really do unification ?
(See the comment on p. 224 of Clocksin & Mellish's "Programming
in Prolog", or just try giving your Prolog system the
goal
?- X = f(X).
sometime when you've nothing better to do.)
The traditional algorithm (see "Computational logic:
The Unification Algorithm", by J. Robinson, in Machine
Intelligence #6) takes, in the worst case, time exponential
in the length of the terms it's trying to unify. There are
algorithms by Paterson and Wegman, Huet, or Martelli and
Montanari (see "An efficient unification algorithm", by Martelli
and Montanari, in ACM ToPLaS, April 1982, for a discussion
of these three) which are better asymptotically.
Is anyone using them ?
-- Bruce Smith,
UNC-Chapel Hill
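(A small editorial aside, not from the digest: for concreteness, here is
a minimal sketch of the traditional, occurs-check-included unification in
ordinary Edinburgh-style Prolog. The predicate names unifyoc/2 and
occurs/2 are invented; the algorithm is the naive one, so it is sound but
has exactly the worst-case cost discussed in the message above.)

unifyoc(X, Y) :- var(X), var(Y), !, X = Y.
unifyoc(X, Y) :- var(X), !, \+ occurs(X, Y), X = Y.
unifyoc(X, Y) :- var(Y), !, \+ occurs(Y, X), Y = X.
unifyoc(X, Y) :- atomic(X), !, X == Y.
unifyoc(X, Y) :-
        X =.. [F|Xargs],
        Y =.. [F|Yargs],
        unifyargs(Xargs, Yargs).

unifyargs([], []).
unifyargs([X|Xs], [Y|Ys]) :- unifyoc(X, Y), unifyargs(Xs, Ys).

occurs(V, T) :- var(T), !, V == T.
occurs(V, T) :- T =.. [_|Args], occursin(Args, V).

occursin([A|_], V) :- occurs(V, A), !.
occursin([_|As], V) :- occursin(As, V).

% With these clauses the goal  ?- unifyoc(X, f(X)).  fails cleanly,
% whereas plain  X = f(X)  may loop or build a circular term.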
------------------------------
End of PROLOG Digest
********************
∂28-Oct-83 0810 KJB@SRI-AI.ARPA Alfred Tarski
Received: from SRI-AI by SU-AI with TCP/SMTP; 28 Oct 83 08:10:45 PDT
Date: Fri 28 Oct 83 08:04:18-PDT
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: Alfred Tarski
To: csli-folks@SRI-AI.ARPA
cc: keisler@WISC-RSCH.ARPA, kunen@WISC-RSCH.ARPA
I have just learned of the death of Alfred Tarski, the father of
model theoretic semantics. He died Wednesday, at the age of 82.
-------
∂28-Oct-83 1209 GOLUB@SU-SCORE.ARPA KEYS to MJH
Received: from SU-SCORE by SU-AI with TCP/SMTP; 28 Oct 83 12:08:59 PDT
Date: Fri 28 Oct 83 11:56:53-PDT
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: KEYS to MJH
To: su-bboards@SU-SCORE.ARPA
cc: faculty@SU-SCORE.ARPA
We have had a large number of requests for keys to Margaret Jacks Hall. We
understand many persons who are not in the department would like access to the
building on weekends and in the evening. It is the policy of this department to
issue keys only to those persons who are directly associated with the
department. Persons with SCORE accounts are not automatically given keys.
Furthermore no person without a key should be allowed into the
building when it is locked.
GENE GOLUB
-------
∂28-Oct-83 1310 @SU-SCORE.ARPA:MACKINLAY@SUMEX-AIM.ARPA Re: KEYS to MJH
Received: from SU-SCORE by SU-AI with TCP/SMTP; 28 Oct 83 13:10:43 PDT
Received: from SUMEX-AIM.ARPA by SU-SCORE.ARPA with TCP; Fri 28 Oct 83 13:09:38-PDT
Date: Fri 28 Oct 83 12:19:14-PDT
From: Jock Mackinlay <MACKINLAY@SUMEX-AIM.ARPA>
Subject: Re: KEYS to MJH
To: GOLUB@SU-SCORE.ARPA
cc: su-bboards@SU-SCORE.ARPA, faculty@SU-SCORE.ARPA
In-Reply-To: Message from "Gene Golub <GOLUB@SU-SCORE.ARPA>" of Fri 28 Oct 83 12:10:04-PDT
It has been my experience as a manager of a residence hall that the key
to the front door must be changed every few years to make sure that
too many randoms don't have access to the building. We had street
people who had keys to our front door. Rekeying the front door is
quite expensive but if you are interested in security it is the only way.
Jock
-------
∂28-Oct-83 1402 LAWS@SRI-AI.ARPA AIList Digest V1 #84
Received: from SRI-AI by SU-AI with TCP/SMTP; 28 Oct 83 14:00:25 PDT
Date: Friday, October 28, 1983 8:59AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #84
To: AIList@SRI-AI
AIList Digest Friday, 28 Oct 1983 Volume 1 : Issue 84
Today's Topics:
Metaphysics - Split Consciousness,
Halting Problem - Discussion,
Intelligence - Recursion & Parallelism & Consciousness
----------------------------------------------------------------------
Date: 24 Oct 83 20:45:29-PDT (Mon)
From: pur-ee!uiucdcs!uicsl!dinitz @ Ucb-Vax
Subject: Re: consciousness and the teleporter - (nf)
Article-I.D.: uiucdcs.3417
See also the 17th and final essay by Daniel Dennett in his book Brainstorms
[Bradford Books, 1978]. The essay is called "Where Am I," and investigates
exactly this question of "split consciousness."
------------------------------
Date: Thu 27 Oct 83 23:04:47-MDT
From: Stanley T. Shebs <SHEBS@UTAH-20.ARPA>
Subject: Semi-Summary of Halting Problem Discussion
Now that the discussion on the Halting Problem etc has died down,
I'd like to restate the original question, which seems to have been
misunderstood.
The question is this: consider a learning program, or any program
that is self-modifying in some way. What must I do to prevent it
from getting caught in an infinite loop, or a stack overflow, or
other unpleasantnesses? For an ordinary program, it's no problem
(heh-heh), the programmer just has to be careful, or prove his
program correct, or specify its operations axiomatically, or <insert
favorite software methodology here>. But what about a program
that is changing as it runs? How can *it* know when it's stuck
in a losing situation?
The best answers I saw were along the lines of an operating system
design, where a stuck process can be killed, or pushed to the bottom
of an agenda, or whatever. Workable, but unsatisfactory. In the case
of an infinite loop (that nastiest of possible errors), the program
can only guess that it has created a situation where infinite loops
can happen.
The most obvious alternative is to say that the program needs an "infinite
loop detector". Ted Jardine of Boeing tells a story where, once upon
a time, some company actually tried to do this - write a program that
would detect infinite loops in any other program. Of course, this is
ludicrous; it's a version of the Halting Problem. For loops in a
program under a given length, yes; arbitrary programs, no. So our
self-modifying program can manage only a partial solution, but that's
ok, because it only has to be able to analyze itself and its subprograms.
The question now becomes: can a program of length n detect infinite
loops in any program of length <= n ? I don't know; you can't just
have it simulate itself and watch for duplicated states showing up,
because the extra storage for the in-between states would cause the
program to grow, and then you have violated the initial conditions for the
question. Some sort of static analysis could detect special cases
(like the Life blinkers mentioned by somebody), but I doubt that
all cases could be done this way. Any theory types out there with
the answer?
Anyway, I *don't* think these are vacuous problems; I encountered them
when working on a learning capability for my parser, and "solved" them
by being very careful about rules that expanded the sentence, rather
than reducing (really just context-sensitive vs context-free).
Am facing it once again in my new project (a KR language derived from
RLL), and this time there's no way to sidestep! Any new ideas would
be greatly appreciated.
Stan Shebs
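(Another editorial aside, not part of the message above: one partial way
to look for the repeated states mentioned there, without storing the whole
history, is the classical two-pointer "tortoise and hare" trick. In the
sketch below, step/2 is a hypothetical relation giving the next state of a
deterministic process. Nothing here escapes the Halting Problem -- on an
infinite, non-repeating run the goal simply never returns -- but it does
avoid the storage-growth objection.)

% loops(S0) succeeds if the state sequence starting at S0 eventually
% repeats a state; it fails if the process halts (step/2 has no next
% state); it does not terminate on an infinite non-repeating run.

loops(S0) :-
        step(S0, S1),
        race(S0, S1).

race(Slow, Fast) :-
        Slow == Fast, !.           % fast pointer caught the slow one: a cycle
race(Slow, Fast) :-
        step(Slow, Slow1),         % slow pointer advances one step
        step(Fast, Fast1),         % fast pointer advances two steps
        step(Fast1, Fast2),
        race(Slow1, Fast2).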
------------------------------
Date: Wed, 26 Oct 1983 16:30 EDT
From: BATALI%MIT-OZ@MIT-MC.ARPA
Subject: Transcendental Recursion
I've just joined this mailing list and I'm wondering about the recent
discussion of "consciousness." While it's an interesting issue, I
wonder how much relevance it has for AI. Thomas Nagel's article "What
is it like to be a bat?" argues that consciousness might never be the
proper subject of scientific inquiry because it is, by its nature,
subjective (to the max, as it were) and science can deal with only
objective (or at least public) things.
Whatever the merits of this argument, it seems that a more profitable
object of our immediate quest might be intelligence. Now it may be the
case that the two are the same thing -- or it may be that consciousness
is just "what it is like" to be an intelligent system. On the other
hand, much of our "unconscious" or "subconscious" reasoning is very
intelligent. Consider the number of moves that a chess master doesn't
even consider -- they are rejected even before being brought to
consciousness. Yet the action of rejecting them is a very intelligent
thing to do. Certainly someone who didn't reject those moves would have
to waste time considering them and would be a worse (less intelligent?)
chess player. Conversely, it seems reasonable to suppose that one cannot
be conscious unless intelligent.
"Intelligent" like "strong" is a dispositional term, which is to say it
indicates what an agent thus described might do or tend to do or be able
to do in certain situations. Whereas it is difficult to give a sharp
boundary between the intelligent and the non-intelligent, it is often
possible to say which of two possible actions would be the more
intelligent.
In most cases, it is possible to argue WHY the action is the more
intelligent. The argument will typically mention the goals of the
agent, its abilities, and its knowledge about the world. So it seems
that there is a fairly simple and common understanding of how the term
is applied: An action is intelligent just in case it well satisfies
some goals of the agent, given what the agent knows about the world. An
agent is intelligent just in case it performs actions that are
intelligent for it to perform.
A potential problem with this is that the proposed account requires that
the agent often be able to figure out some very difficult things on the
way to generating an intelligent action: Which goal should I satisfy?
What is the case in the world? Should I try to figure out a better
solution? Each of these subproblems, constitutive of intelligence,
seems to require intelligence.
But there is a way out, and it might bring us back to the issue of
consciousness. If the intelligent system is a program, there is no
problem with its applying itself recursively to its subproblems. So the
subproblems can also be solved intelligently. For this to work, though,
the program must understand itself and understand when and how to apply
itself to its subproblems. So at least some introspective ability seems
like it would be important for intelligence, and the better the system
was at introspective activities, the more intelligent it would be. The
recent theses of Doyle and Smith seem to indicate that a system could be
COMPLETELY introspective in the sense that all aspects of its operation
could be accessible and modifiable by the program itself.
But I don't know if it would be conscious or not.
------------------------------
Date: 26 Oct 1983 1537-PDT
From: Jay <JAY@USC-ECLC>
Subject: Re: Parallelism and Consciousness
Anything that can be done in parallel can be done sequentially.
Parallel computations can be faster, and can be easier to
understand/write. So if consciousness can be programmed, and if it is
as complex as it seems, then perhaps parallelism should be exploited.
No algorithm is inherently parallel.
j'
------------------------------
Date: Thu 27 Oct 83 14:01:59-EDT
From: RICKL%MIT-OZ@MIT-MC.ARPA
Subject: Parallelism & Consciousness
From: BUCKLEY@MIT-OZ
Subject: Parallelism and Consciousness
-- of what relevance is the issue of time-behavior of an algorithm to
the phenomenon of intelligence, i.e., can there be in principle such a
beast as a slow, super-intelligent program?
gracious, isn't this a bit chauvinistic? suppose that ai is eventually
successful in creating machine intelligence, consciousness, etc. on
nano-second speed machines of the future: we poor humans, operating
only at rates measured in seconds and above, will seem incredibly slow
to them. will they engage in debate about the relevance of our time-
behavior to our intelligence? if there cannot in principle be such a
thing as a slow, super-intelligent program, how can they avoid concluding
that we are not intelligent?
-=*=- rick
------------------------------
Mail-From: DUGHOF created at 27-Oct-83 14:14:27
Date: Thu 27 Oct 83 14:14:27-EDT
From: DUGHOF@MIT-OZ
Subject: Re: Parallelism & Consciousness
To: RICKL@MIT-OZ
In-Reply-To: Message from "RICKL@MIT-OZ" of Thu 27 Oct 83 14:04:28-EDT
About slow intelligence -- there is one and only one reason to have
intelligence, and that is to survive. That is where intelligence
came from, and that is what it is for. It will do no good to have
a "slow, super-intelligent program", for that is a contradiction in
terms. Intelligence has to be fast enough to keep up with the
world in real time. If the superintelligent AI program is kept in
some sort of shielded place so that its real-time environment is
essentially benevolent, then it will develop a different kind of
intelligence from one that has to operate under higher pressures,
in a faster-changing world. Everybody has had the experience of
wishing they'd made some clever retort to someone, but thinking of
it too late. Well, if you always thought of those clever remarks
on the spot, you'd be smarter than you are. If things that take
time (chess moves, writing good articles, developing good ideas)
took less time, then I'd be smarter. Intelligence and the passage
of time are not unrelated. You can't slow your processor down and
then claim that your program's intelligence is unaffected, even
if it's running the same program. The world is marching ahead at
the same speed, and "pure, isolated intelligence" doesn't exist.
------------------------------
Date: Thu 27 Oct 83 14:57:18-EDT
From: RICKL%MIT-OZ@MIT-MC.ARPA
Subject: Re: Parallelism & Consciousness
From: DUGHOF@MIT-OZ
Subject: Re: Parallelism & Consciousness
In-Reply-To: Message from "RICKL@MIT-OZ" of Thu 27 Oct 83 14:04:28-EDT
About slow intelligence -- there is one and only one reason to have
intelligence, and that is to survive.... It will do no good to have
a "slow, super-intelligent program", for that is a contradiction in
terms. Intelligence has to be fast enough to keep up with the
world in real time.
are you claiming that if we someday develop super-fast super-intelligent
machines, then we will no longer be intelligent? this seems implicit in
your argument, and seems itself to be a contradiction in terms: we *were*
intelligent until something faster came along, and then after that we
weren't.
or if this isn't strong enough for you -- you seem to want intel-
ligence to depend critically on survival -- imagine that the super-fast
super-intelligent computers have a robot interface, are malevolent,
and hunt us humans to extinction in virtue of their superior speed
& reflexes. does the fact that we do not survive mean that we are not
intelligent? or does it mean that we are intelligent now, but could
suddenly become un-intelligent without ourselves changing (in virtue
of the world around us changing)?
doubtless survival is important to the evolution of intelligence, & that
point is not really under debate. however, to say that whether something is
or is not intelligent is a property dependent on the relative speed of the
creatures sharing your world seems to make us un-intelligent as machines
and programs get better, and amoebas intelligent as long as they were
the fastest survivable thing around.
-=*=- rick
------------------------------
Date: Thu, 27 Oct 1983 15:26 EDT
From: STRAZ%MIT-OZ@MIT-MC.ARPA
Subject: Parallelism & Consciousness
Hofstadter:
About slow intelligence -- there is one and only one [...]
Lathrop:
doubtless survival is important to the evolution of intelligence, &
that point is not really under debate.
Me:
No, survival is not the point. It is for the organic forms that
evolved with little help from outside intelligences, but a computer
that exhibits a "slow, super-intelligence" in the protective
custody of humans can solve problems that humans might never
be able to solve (due to short attention span, lack of short-term
memory, tedium, etc.)
For example, a problem like where to best put another bridge/tunnel
in Boston is a painfully difficult thing to think about, but if
a computer comes up with a good answer (with explanatory justifications)
after thinking for a month, it would have fulfilled anyone's
definition of slow, superior intelligence.
------------------------------
Date: Thu, 27 Oct 1983 23:35 EDT
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Parallelism & Consciousness
That's what you get for trying to define things too much.
------------------------------
End of AIList Digest
********************
∂29-Oct-83 0201 @SRI-AI.ARPA:Bush@SRI-KL.ARPA Dennis Klatt seminar
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Oct 83 02:01:21 PDT
Received: from SRI-KL.ARPA by SRI-AI.ARPA with TCP; Sat 29 Oct 83 01:59:58-PDT
Date: Fri 28 Oct 83 17:53:51-PDT
From: Marcia Bush <Bush at SRI-KL>
Subject: Dennis Klatt seminar
To: csli-friends at SRI-AI
********************** SEMINAR ANNOUNCEMENT ***********************
Speaker: Dennis Klatt
Massachusetts Institute of Technology
Topic: Rules for Deriving Segmental Durations in
American English Sentences
Date: Monday, November 7
Time: 11:00 a.m.
Place: Fairchild Laboratory for Artificial Intelligence
Research (visitors call ext. 4282 from lobby for
an escort)
Abstract:
Rules for the derivation of segmental durations appropriate for
English sentences are presently included in Dectalk. The nature of
these rules, and how they were derived by examination of a
moderate corpus of text, will be described.
********************************************************************
Note: Dectalk is a speech-synthesis-by-rule program offered as an
option on certain DEC terminals.
-------
∂29-Oct-83 1049 CLT SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
To: "@DIS.DIS[1,CLT]"@SU-AI
SPEAKER: S. Feferman
TITLE: An introduction to "Reverse Mathematics" - continued
TIME: Wednesday, November 2, 4:15-5:30 PM
PLACE: Stanford Mathematics Dept. Faculty Lounge (383-N)
The talk will continue the survey, begun last week, of work by Friedman,
Simpson and others. This work provides sharp information in the form of
equivalences as to which set-existence axioms are needed to prove various
statements in analysis and algebra.
Coming Events:
November 9, Jose Meseguer, SRI - COMPUTABILITY OF ABSTRACT DATA TYPES
∂29-Oct-83 1059 @SRI-AI.ARPA:CLT@SU-AI SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Oct 83 10:53:45 PDT
Received: from SU-AI.ARPA by SRI-AI.ARPA with TCP; Sat 29 Oct 83 10:53:27-PDT
Date: 29 Oct 83 1049 PDT
From: Carolyn Talcott <CLT@SU-AI>
Subject: SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
To: "@DIS.DIS[1,CLT]"@SU-AI
SPEAKER: S. Feferman
TITLE: An introduction to "Reverse Mathematics" - continued
TIME: Wednesday, November 2, 4:15-5:30 PM
PLACE: Stanford Mathematics Dept. Faculty Lounge (383-N)
The talk will continue the survey, begun last week, of work by Friedman,
Simpson and others. This work provides sharp information in the form of
equivalences as to which set-existence axioms are needed to prove various
statements in analysis and algebra.
Coming Events:
November 9, Jose Meseguer, SRI - COMPUTABILITY OF ABSTRACT DATA TYPES
∂30-Oct-83 1142 ALMOG@SRI-AI.ARPA reminder on why context wont go away
Received: from SRI-AI by SU-AI with TCP/SMTP; 30 Oct 83 11:41:54 PST
Date: 30 Oct 1983 1137-PST
From: Almog at SRI-AI
Subject: reminder on why context wont go away
To: csli-friends at SRI-AI
cc: almog
On Tuesday 11.1.83 we have our fifth meeting. The speaker will
be Howard Wettstein, University of Notre Dame. Next week on
8.11.83, the speaker will be Stan Peters. Attached is an abstract
of Wettstein's talk.
HOW TO BRIDGE THE GAP BETWEEN MEANING AND REFERENCE
H. Wettstein, Tuesday, 3:15, Ventura Hall
Direct reference theorists, opponents of Frege's sense-reference picture
of the connection between language and reality, are divided on the question
of the precise mechanism of such connection. In this paper I restrict my
attention to indexical expressions and argue against both the causal theory
of reference and Donnellan's idea that reference is determined by
the speaker's intentions, and in favor of a more socially oriented view.
Reference is determined by the cues that are available to the competent
addressee.
-------
∂30-Oct-83 1241 KJB@SRI-AI.ARPA Visit by Glynn Winskel
Received: from SRI-AI by SU-AI with TCP/SMTP; 30 Oct 83 12:41:04 PST
Date: Sun 30 Oct 83 12:36:47-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: Visit by Glynn Winskel
To: csli-folks@SRI-AI.ARPA
Dear all,
CSLI is sponsoring a series of lectures by Glynn Winskel, from
Nov 3 to 11. A full program will be distributed in a day or two. I just
wanted to tell you that there are two reasons for this visit. One is that
he is reputed to be one of the brightest young people in the area of
computer language semantics, and a good speaker, so we should be able
to learn a lot from him. The second is that he is now at CMU and is going
to Edinburgh in January, so if he finds this an exciting place, it will
help us attract people from the two centers of work in this area. Thus,
I hope that each of you with any interest in the semantics of computer
languages will make an opportunity to talk with him. He will be using
Brian's office here at Ventura (497-1710).
I will organize a dinner following his colloquium on Thursday. All are
welcome.
If you want to make an appointment with him, you could either call
the above number (Sandy will be acting as his secretary that week) or
send him a message (glynn.winskel@cmua) in advance.
Thanks, Jon
-------
∂30-Oct-83 1730 @SRI-AI.ARPA:BrianSmith.pa@PARC-MAXC.ARPA Request
Received: from SRI-AI by SU-AI with TCP/SMTP; 30 Oct 83 17:30:48 PST
Received: from PARC-MAXC.ARPA by SRI-AI.ARPA with TCP; Sun 30 Oct 83 17:31:42-PST
Date: 30 Oct 83 17:29 PDT
From: BrianSmith.pa@PARC-MAXC.ARPA
Subject: Request
To: CSLI-Requests@SRI-AI.ARPA
cc: KJB@SRI-AI.ARPA, DKanerva@SRI-AI.ARPA, CSLI-Principals@SRI-AI.ARPA,
BrianSmith.pa@PARC-MAXC.ARPA
Please add both Jon Barwise (KJB@SRI-AI) and Dianne Kanerva
(DKanerva@SRI-AI) to ALL of the area mailing lists (CSLI-A1, CSLI-A2,
...). Both Jon and Dianne would like to be in touch with what is going
on -- Dianne in part as a way of noticing material that should be
included in the weekly newsletter.
Many thanks.
Brian
∂30-Oct-83 2310 GOLUB@SU-SCORE.ARPA Position at ONR-London
Received: from SU-SCORE by SU-AI with TCP/SMTP; 30 Oct 83 23:10:33 PST
Date: Sun 30 Oct 83 23:09:23-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Position at ONR-London
To: su-bboards@SU-SCORE.ARPA, faculty@SU-SCORE.ARPA
The ONR in London is looking for someone in AI, program verification,
expert systems, software engineering, operating systems or the like---
to spend 15 months or more travelling in Europe and the Middle East
so as to report on developments in computer science. This is a fine
opportunity to meet with scientific colleagues, to travel widely,
to serve the CS community in the US ---and the pay and staff benefits are
good, too. For further information, write Dr James Daniel ( Stanford '65)
at the Department of the Navy, ONR ---London, Box 39, FPO, New York 09510.
Gene Golub
-------
∂31-Oct-83 0901 SCHREIBER@SU-SCORE.ARPA Talk today
Received: from SU-SCORE by SU-AI with TCP/SMTP; 31 Oct 83 09:00:57 PST
Date: Mon 31 Oct 83 09:00:10-PST
From: Robert Schreiber <SCHREIBER@SU-SCORE.ARPA>
Subject: Talk today
To: faculty@SU-SCORE.ARPA
In today's NA seminar I will talk about my recent work on the application
of systolic arrays to linear algebraic computation. I will discuss arrays
for computing eigenvalues of symmetric matrices, arrays for updating
matrix factorizations and arrays for solving linear, discrete ill-posed
problems.
The seminar is given in room 380C at 4:15
Rob Schreiber
-------
∂31-Oct-83 1003 HANS@SRI-AI.ARPA Re: Request
Received: from SRI-AI by SU-AI with TCP/SMTP; 31 Oct 83 10:02:54 PST
Date: Mon 31 Oct 83 10:03:48-PST
From: Hans Uszkoreit <Hans@SRI-AI.ARPA>
Subject: Re: Request
To: BrianSmith.pa@PARC-MAXC.ARPA, CSLI-Requests@SRI-AI.ARPA
cc: KJB@SRI-AI.ARPA, DKanerva@SRI-AI.ARPA, CSLI-Principals@SRI-AI.ARPA
In-Reply-To: Message from "BrianSmith.pa@PARC-MAXC.ARPA" of Sun 30 Oct 83 17:29:00-PST
I have added Jon's and Dianne's addresses to all project group mailing
lists. May I give you some comments on the decision to initiate this
(indirect speech act! -- you won't be able to escape the comments unless
you type ↑C or ↑O at this point).
I think the advantages of using the mail traffic of the project groups
to monitor their work have to be weighed against a couple of less
desirable side effects.
Although it would be terribly difficult for Jon to pose as 'big brother'
(he could try today on Halloween), people who don't know him, e.g.
visitors or incoming new researchers, might feel uncomfortable if every
quarter-baked idea that they want to throw at their project group
members gets this additional attention. The same could be true for
excuses about missed meetings, time changes, etc.
When the groups have all gotten into full operation mode, Jon could be
swamped with nitty-gritty junk such as these time change notices,
reference corrections, copy pick-up announcements, etc.
So much for Jon. Now, what could Dianne filter out from the group mail?
Space-time locations for future meetings? No. Dianne could not trust
announcements unless they are sent to her for publication since the
group members might have rearranged things after talking to each other
at TINLUNCH, just to pick a real life example. Could she use the
research-related content of messages for newsletter announcements? I
think that the decision to practice a policy of publishing short
reports on group meetings is good (and courageous) because it has the
side effect of putting some positive pressure on the respective groups.
Discontinuity will be easily recognizable. However, this open policy
goes about as far as one can go. Even if one decides to send out half-
baked ideas and spontaneous group responses to these to about 140
colleagues (about 50 CSLI people and 90 friends) in the bay area and
many other parts of the country, then one still might not want to
include quarter-baked ideas and chunks of raw dough. On the other hand,
if people bake together, there will be enough of this stuff floating
around to cause stomach cramps in large populations of non-participating
humans. It would be too much work for Dianne to act as the pastry
taster.
Conclusion: this might be one of those cases where it doesn't pay to have
more information but where the additional information would cause only
more work. It would be better to urge the project group leaders (or
some other member of each group) to announce meetings and to give short
reports on their outcome.
In case you agree: the programming facilities of EMACS make it
very easy to change the files back.
Hans
-------
∂31-Oct-83 1006 @SU-SCORE.ARPA:Guibas.pa@PARC-MAXC.ARPA Re: Talk today
Received: from SU-SCORE by SU-AI with TCP/SMTP; 31 Oct 83 10:06:17 PST
Received: from PARC-MAXC.ARPA by SU-SCORE.ARPA with TCP; Mon 31 Oct 83 10:04:55-PST
Date: 31 Oct 83 10:00:22 PST
From: Guibas.pa@PARC-MAXC.ARPA
Subject: Re: Talk today
In-reply-to: "SCHREIBER@SU-SCORE.ARPA's message of Mon, 31 Oct 83
09:00:10 PST"
To: Robert Schreiber <SCHREIBER@SU-SCORE.ARPA>
cc: faculty@SU-SCORE.ARPA
Thanks for the invitation. I am interested and will be there.
Leo G.
∂31-Oct-83 1032 RPERRAULT@SRI-AI.ARPA Re: Request
Received: from SRI-AI by SU-AI with TCP/SMTP; 31 Oct 83 10:32:47 PST
Date: Mon 31 Oct 83 10:33:57-PST
From: Ray Perrault <RPERRAULT@SRI-AI.ARPA>
Subject: Re: Request
To: Hans@SRI-AI.ARPA
cc: BrianSmith.pa@PARC-MAXC.ARPA, CSLI-Requests@SRI-AI.ARPA, KJB@SRI-AI.ARPA,
DKanerva@SRI-AI.ARPA, CSLI-Principals@SRI-AI.ARPA, RPERRAULT@SRI-AI.ARPA
In-Reply-To: Message from "Hans Uszkoreit <Hans@SRI-AI.ARPA>" of Mon 31 Oct 83 10:03:56-PST
I think Hans is right about not automatically distributing
mail about group activities to Jon and Dianne. My bet is that
all it would do is encourage people to set up private mailing
lists.
Ray
-------
∂31-Oct-83 1103 KJB@SRI-AI.ARPA Re: Request
Received: from SRI-AI by SU-AI with TCP/SMTP; 31 Oct 83 11:03:52 PST
Date: Mon 31 Oct 83 11:02:16-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: Re: Request
To: Hans@SRI-AI.ARPA
cc: BrianSmith.pa@PARC-MAXC.ARPA, CSLI-Requests@SRI-AI.ARPA,
DKanerva@SRI-AI.ARPA, CSLI-Principals@SRI-AI.ARPA
In-Reply-To: Message from "Hans Uszkoreit <Hans@SRI-AI.ARPA>" of Mon 31 Oct 83 10:03:47-PST
Hans, I had not thought of the group addresses as ways of talking about
the subject matter, throwing out half baked ideas, etc., but it is
a good one, so take me off the list. Jon
-------
∂31-Oct-83 1207 KJB@SRI-AI.ARPA Committee assignments (first pass)
Received: from SRI-AI by SU-AI with TCP/SMTP; 31 Oct 83 12:07:17 PST
Date: Mon 31 Oct 83 12:04:48-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: Committee assignments (first pass)
To: csli-folks@SRI-AI.ARPA
Dear CSLI Folks:
The meetings on Monday and Wednesday were quite helpful. We
should do it more often, perhaps. As a result of these meetings, and
responses from you to them, there have been several good ideas for
changes put forward which Betsy and I are pursuing.
Also, as a result of discussions following these meetings, I
have drawn up the following list of tentative committee assignments.
We will not consider these final for a week or so. However, if
you do not like the assignments you have ended up with here, find
someone to make a trade with, unless you just think you are being
asked to do more (or less!) than your share of work.
The chairperson of each committee and subcommittee is in caps. The
temporal status of the committee is indicated, but the term of an
individual on the committee has not been determined.
Computing (permanent):
Ray PERRAULT, Brian Smith, Stanley Peters, Terry Winograd,
Mabry Tyson
Building committee (permanent):
PETERS, Macken, Moore, Wasow, Kaplan, Bush
Education (permanent): PERRY, Wasow, Kay, McCarthy, Rosenschein,
Course development subcommittee (fall and winter, 83-84): KAY,
Rosenschein, Bresnan, Pollard
Workstation Committee (permanent):
SMITH, Halvorsen, Uszkoreit, Karttunen, Stickel
There is a question as to whether there should be such a
committee, or whether this should be handled by someone hired in
conjunction with the computing committee.
Approaches to human language seminar (fall 83):
Stanley PETERS, Kris Halvorsen
Approaches to computer languages seminar (fall 83):
Brian SMITH, Fernando Pereira
LISP-course seminar (winter 83-84)
SMITH, des Rivieres
Semantics of Natural Languages Seminar (winter, 83-84):
BARWISE, Stucky
Anaphora Seminar (spring, 84):
BRESNAN, Cohen
Semantics of Computer Languages Seminar (spring, 84): BARWISE,
desRivieres
Computer Wizards Committee (83-84):
USZKOREIT, Withgott, Tyson, desRivieres
(for help with using the computers, especially the new ones we
expect)
Colloquium (permanent):
SAG, Pullum (Inner)
ETCHEMENDY, Hobbs (Outer)
Postdoc Committee (permanent)
MOORE, Wasow, Barwise, Stucky
Workshop Committees:
PERRY, Almog (Kaplan workshop)
PEREIRA, Konolige, Smith (ML workshop)
PERRAULT, Kay, Appelt (COLING)
KARTTUNEN, Bush (Morphosyntax and Lexical Morphology)
KIPARSKY, Withgott (Lexical Phonology)
GROSZ, Sag, Ford, Shieber (long range planning)
Outreach Committee (permanent):
BRESNAN, Smith, Pereira (e.g. think about Bell, MIT and
Edinburgh connections)
TINLunch (permanent):
Rosenschein, Appelt
Library Connection (83-84):
HOBBS, Perry, Peters
Please let me know if you have a violent reaction to your role in the
above. I know it seems like an overwhelming number of committees, so
I would welcome suggestions for ways of eliminating the work needing to
be done.
Jon
-------
∂31-Oct-83 1445 LAWS@SRI-AI.ARPA AIList Digest V1 #85
Received: from SRI-AI by SU-AI with TCP/SMTP; 31 Oct 83 14:44:29 PST
Date: Monday, October 31, 1983 9:18AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #85
To: AIList@SRI-AI
AIList Digest Monday, 31 Oct 1983 Volume 1 : Issue 85
Today's Topics:
Intelligence
----------------------------------------------------------------------
Date: Fri 28 Oct 83 13:43:21-EDT
From: RICKL%MIT-OZ@MIT-MC.ARPA
Subject: Re: Parallelism & Consciousness
From: MINSKY@MIT-OZ
That's what you get for trying to define things too much.
what do i get for trying to define what too much??
though obviously, even asking that question is trying to define
your intent too much, & i'll only get more of whatever i got for
whatever it was i got it for.
-=*=-
------------------------------
Date: 28 Oct 1983 12:02-PDT
From: ISAACSON@USC-ISI
Subject: Re: Parallelism & Consciousness
From Minsky:
That's what you get for trying to define things too much.
Coming, as it does, out of the blue, your comment appears to
negate the merits of this discussion. The net effect might
simply be to bring it to a halt. I think that it is, inadvertent
though it might be, unkind to the discussants, and unfair to the
rest of us who are listening in.
I agree. The level of confusion is not insignificant and
immediate insights are not around the corner. However, in my
opinion, we do need serious discussion of these issues. I.e.,
questions of subcognition vs. cognition; parallelism,
"autonomy", and epiphenomena; algorithmic programability vs.
autonomy at the subcognitive and cognitive levels; etc. etc.
Perhaps it would be helpful if you gave us your views on some of
these issues, including your views on a good methodology for
discussing them.
-- JDI
------------------------------
Date: 30 Oct 83 13:27:11 EST (Sun)
From: Don Perlis <perlis%umcp-cs@CSNet-Relay>
Subject: Re: Parallelism & Consciousness
From: BUCKLEY@MIT-OZ
-- of what relevance is the issue of time-behavior of an
algorithm to the phenomenon of intelligence, i.e., can
there be in principle such a beast as a slow,
super-intelligent program?
From: RICKL%MIT-OZ@mit-mc
gracious, isn't this a bit chauvinistic? suppose that ai is
eventually successful in creating machine intelligence,
consciousness, etc. on nano-second speed machines of the
future: we poor humans, operating only at rates measured in
seconds and above, will seem incredibly slow to them. will
they engage in debate about the relevance of our time- behavior
to our intelligence? if there cannot in principle be such a
thing as a slow, super-intelligent program, how can they avoid
concluding that we are not intelligent? -=*=- rick
It seems to me that the issue isn't the 'appearance' of intelligence of
one being to another--after all, a very slow thinker may nonetheless
think very effectively and solve a problem the rest of us get nowhere
with. Rather I suggest that intelligence be regarded as effectiveness,
namely, as coping with the environment. Then real-time issues clearly
are significant.
A supposedly brilliant algorithm that 'in principle' could decide what
to do about an impending disaster, but which is destroyed by that
disaster long before it manages to grasp that there is a disaster, or
what its dimensions are, perhaps should not be called intelligent (at
least on the basis of *that* event). And if all its potential behavior
is of this sort, so that it never really gets anything settled, then it
could be looked at as really out of touch with any grasp of things,
hence not intelligent.
Now this can be looked at in numerous contexts; if for instance it is
applied to the internal ruminations of the agent, eg as it tries to
settle Fermat's Last Theorem, and if it still can't keep up with its
own physiology, ie, its ideas form and pass by faster than its
'reasoning mechanisms' can keep track of, then it will fail there too,
and I doubt we would want to say it 'really' was bright. It can't even
be said to be trying to settle Fermat's Last theorem, for it will not
be able to keep that in mind.
This is in a sense an internal issue, not one of relative speed to the
environment. But considering that the internal and external events are
all part of the same physical world, I don't see a significant
difference. If the agent *can* keep track of its own thinking, and
thereby stick to the task, and eventually settle the theorem, I think
we would call it bright indeed, at least in that domain, although
perhaps a moron in other matters (not even able to formulate questions
about them).
------------------------------
Date: Sun 30 Oct 83 16:59:12-EST
From: RICKL%MIT-OZ@MIT-MC.ARPA
Subject: Re: Parallelism & Consciousness
[...]
From: Don Perlis <perlis%umcp-cs@CSNet-Relay>
It seems to me that the issue isn't the 'appearance' of intelligence of
one being to another....Rather I suggest that intelligence be regarded
as effectiveness, namely, as coping with the environment....
From this & other recent traffic on the net, the question we are really
discussing seems to be: ``can an entity be said to be intelligent in and
of itself, or can an entity only be said to be intelligent relative to some
world?''. I don't think I believe in "pure, abstract intelligence, divorced
from the world". However, a consequence of the second position seems to
be that there should be possible worlds in which we would consider humans
to be un-intelligent, and I can't readily think of any (can anyone else?).
Leaving that question as too hard (at least for now), another question we
have been chasing around is: ``can intelligence be regarded as survivability,
(or more generally as coping with an external environment)?''. In the strong
form this position equates the two, and this position seems to be too
strong. Amoebas cope quite well and have survived for unimaginably longer
than we humans, but are generally acknowledged to be un-intelligent (if
anyone cares to dispute this, please do). Survivability and coping with
the environment, alone, therefore fail to adequately capture our intuitions
of intelligence.
-=*=- rick
------------------------------
Date: 30 Oct 1983 18:46:48 EST (Sunday)
From: Dave Mankins <dm@BBN-UNIX>
Subject: Re: Intelligence and Competition
By the survivability/adaptability criteria the cockroach must be
one of the most intelligent species on earth. There's obviously
something wrong with those criteria.
------------------------------
Date: Fri 28 Oct 83 14:19:36-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Definition of Intelligence
I like the idea that the intelligence of an organism should be
measured relative to its goals (which usually include survival, but
not in the case of "smart" bombs and kamikaze pilots). I don't think
that goal-satisfaction criteria can be used to establish the "relative
intelligence" of organisms with very different goals. Can a fruit fly
be more intelligent than I am, no matter how well it satisfies its
goals? Can a rock be intelligent if its goals are sufficiently
limited?
To illustrate this in another domain, let us consider "strength". A
large bulldozer is stronger than a small one because it can apply more
brute force to any job that a bulldozer is expected to do. Can we
say, though, that a bulldozer is "stronger" than a pile driver, or
vice versa?
Put another way: If scissors > paper > rock > scissors ..., does it
make any sense to ask which is "best"? I think that this is the
problem we run into when we try to define intelligence in terms of
goals. This is not to say that we can define it to be independent of
goals, but goal satisfaction is not sufficient.
Instead, I would define intelligence in terms of adaptability or
learning capability in the pursuit of goals. An organism with hard-
wired responses to its environment (e.g. a rock, a fruit fly, MACSYMA)
is not intelligent because it does not adapt. I, on the other hand,
can be considered intelligent even if I do not achieve my goals as
long as I adapt to my environment and learn from it in ways that would
normally enhance my chances of success.
Whether speed of response must be included as a measure of
intelligence depends on the goal, but I would say that, in general,
rapid adaptation does indicate greater intelligence than the same
response produced slowly. Multiple choice aptitude tests, however,
exercise such limited mental capabilities that a score of correct
answers per minute is more a test of current knowledge than of ability
to learn and adapt within the testing period. Knowledge relative to
age (IQ) is a useful measure of learning ability and thus of
intelligence, but cannot be used for comparing different species. I
prefer unlimited-time "power" tests for measuring both competence and
intelligence.
The Turing test imposes a single goal on two organisms, namely the
goal of convincing an observer at the other end of a tty that he/it is
the true human. This will clearly only work for organisms capable
of typing at human speed and capable of accepting such a goal. These
conditions imply that the organism must have a knowledge of human
psychology and capabilities, or at least a belief (probably incorrect)
that it can "fake" them. Given such a restricted situation, the
nonhuman organism is to be judged intelligent if it can appropriately
modify its own behavior in response to questioning at least as well as
the human can. (I would claim that a nonadapting organism hasn't a
chance of passing the test, and that this is just what the observer
will be looking for.)
I do not believe that a single test can be devised which can determine
the relative intelligences of arbitrary organisms, but the public
wants such a test. What shall we give them? I would suggest the
following procedure:
For two candidate organisms, determine a goal that both are capable
of accepting and that we consider related to intelligence. For an
interesting test, the goal must be such that neither organism is
specially adapted or maladapted for achieving it. The goal might be
absolute (e.g., learn 100 nonsense syllables) or relative (e.g.,
double your vocabulary). If no such goal can be found, the
organisms cannot be ranked relative to each other. If a goal is found, we can rank them
along the dimension of the indicated behavior and we can infer a
similar ranking for related behaviors (e.g., verbal ability). The
actual testing for learning ability is relatively simple.
How can we test a computer for intelligence? Unfortunately, a computer
can be given a wide variety of sensors and effectors and can be made
to accept almost any goal. We must test it for human-level adaptability
in using all of these. If it cannot equal human ability nearly all
measurable scales (e.g., game playing, verbal ability, numerical
ability, learning new perceptual and motor skills, etc.), it cannot
be considered intelligent in the human sense. I know that this is
exceedingly strict, but it is the same test that I would apply to
decide whether a child, idiot savant, or other person were intelligent.
On the other hand, if I could not match the computer's numerical and
memory capabilities, it has the right to judge me unintelligent by
computer standards.
The intelligence of a particular computer program, however, should
be judged by much less stringent standards. I do not expect a
symbolic algebra program to learn to whistle Dixie. If it can
learn, without being programmed, a new form of integral faster
than I can, or if it can find a better solution than I can in
any length of time, then I will consider it an intelligent symbolic
algebra program. Similar criteria apply to any other AI program.
I have left open the question of how to measure adaptability,
relative importance of differing goals, parallel satisfaction of
multiple goals, etc. I have also not discussed creativity, which
involves autonomous creation of new goals. Have I missed anything,
though, in the basic concept of intelligence?
-- Ken Laws
------------------------------
Date: 30 Oct 1983 1456-PST
From: Jay <JAY@USC-ECLC>
Subject: Re: Parallelism & Consciousness
From: RICKL%MIT-OZ@MIT-MC.ARPA
...
the question we are really discussing seems to be: ``can an entity be
said to be intelligent in and of itself, or can an entity only be said
to be intelligent relative to some world?''. I don't think I believe
in "pure, abstract intelligence, divorced from the world".
...
another question we have been chasing around is: ``can intelligence be
regarded as survivability, (or more generally as coping with an
external environment)?''. [...]
I believe intelligence to be the ability to cope with CHANGES in the
environment. Take desert tortoises: although they are quite young
compared to amoebas, they have been living in the desert some
thousands, if not millions, of years. Does this mean they are
intelligent? NO! Put a freeway through their desert and the tortoises
are soon dying. Increase the rainfall and they may become unable to
compete with the rabbits (which will take full advantage of the
increase in vegetation and produce an increase in rabbit-ation). The
ability to cope with a CHANGE in the environment marks intelligence.
All a tortoise need do is not cross a freeway, or kill baby rabbits,
and then they could begin to claim intelligence. A similar argument
could be made against intelligent amoebas.
A possible problem with this view is that biospheres can be counted
intelligent: in the desert an increase in rainfall is handled by an
increase in vegetation, and then in herbivores (rabbits) and then an
increase in carnivores (coyotes). The end result is not the end of a
biosphere, but the change of a biosphere. The biosphere has
successfully coped with a change in its environment. Even more
ludicrous, an argument could be made for an intelligent planet, or
solar system, or even galaxy.
Notice that an organism that does not change when its environment
changes, perhaps because it does not need to, has not shown
intelligence. This is, of course, not to say that that particular
organism is un-intelligent. Were the world to become unable to
produce rainbows, people would change little, if at all.
My behavioralism is showing,
j'
------------------------------
Date: Sun, 30 Oct 1983 18:11 EST
From: JBA%MIT-OZ@MIT-MC.ARPA
Subject: Parallelism & Consciousness
From: RICKL%MIT-OZ at MIT-MC.ARPA
However, a consequence of the second position seems to
be that there should be possible worlds in which we would consider humans
to be un-intelligent, and I can't readily think of any (can anyone else?).
Read the Heinlein novel entitled (I think) "Have Spacesuit, Will
Travel." Somewhere in there a race tries to get permission to
kill humans wantonly, arguing that they're basically stupid. Of
course, a couple of adolescent humans who happen to be in the neighborhood
save the day by proving that they're smart. (I read this thing a long
time ago, so I may have the story and/or title a little wrong.)
Jonathan
[Another story involves huge alien "energy beings" taking over the earth.
They destroy all human power sources, but allow the humans to live as
"cockroaches" in their energy cities. One human manages to convince an
alien that he is intelligent, so the aliens immediately begin a purge.
Who wants intelligent cockroaches? -- KIL]
------------------------------
Date: Sun 30 Oct 83 15:41:18-PST
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: Intelligence and Competition
From: RICKL%MIT-OZ@MIT-MC.ARPA
I don't think I believe in "pure, abstract intelligence, divorced
from the world". However, a consequence of the second position seems to
be that there should be possible worlds in which we would consider humans
to be un-intelligent, and I can't readily think of any (can anyone else?).
From: Jay <JAY@USC-ECLC>
...Take desert tortoises, [...]
Combining these two comments, I came up with this:
...Take American indians, although they are quite young compared
to amoeba, they have been living in the desert some thousands of years.
Does this mean they are intelligent? NO! Put a freeway (or some barbed
wire) through their desert and they are soon dying. Increase cultural
competition and they may be unable to compete with the white man (which
will take full advantage of their lack of guns and produce an
increase in white-ation). The ability to cope with CHANGE in the
environment marks intelligence.
I think that the stress on "adaptability" makes for some rather strange
candidates for intelligence. The indians were developing a cooperative
relationship with their environment, rather than a competitive one; I cannot
help but think that our cultural stress on competition has biased us
towards competitive definitions of intelligence.
Survivability has many facets, and competition is only one of them, and
may not even be a very large one. Perhaps before one judges intelligence by
how systems cope with change, one should ask how well the systems
cope with stasis. While it is popular to think about how the great thinkers
of the past arose out of great trials, I think that more of modern knowledge
came from times of relative calm, when there was enough surplus to offer
a group of thinkers time to ponder.
David
------------------------------
End of AIList Digest
********************
∂31-Oct-83 1951 LAWS@SRI-AI.ARPA AIList Digest V1 #86
Received: from SRI-AI by SU-AI with TCP/SMTP; 31 Oct 83 19:50:24 PST
Date: Monday, October 31, 1983 9:53AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #86
To: AIList@SRI-AI
AIList Digest Monday, 31 Oct 1983 Volume 1 : Issue 86
Today's Topics:
Complexity Measures - Request,
Obituary - Alfred Tarski,
Seminars - Request for Synopses,
Discourse Analysis - Representation,
Review - JSL Review of GEB,
Games - ACM Chess Results,
Software Verification - VERUS System Offered,
Conferences - FGCS Call for Papers
----------------------------------------------------------------------
Date: 24 October 1983 17:02 EDT
From: Karl A. Nyberg <KARL @ MIT-MC>
Subject: writing analysis
I am interested in programs that people might know of that give word
distributions, sentence lengths, etc., so as to gauge the complexity of
articles. I'd also like to know if anyone could point me to any models
that specify that complexity in terms of these sorts of measurements.
Let me know if any programs you might know of are particular to any text
formatter, programming language, or operating system. Thanks.
-- Karl --
[Such capabilities are included in recent versions of the Unix
operating system. -- KIL]
------------------------------
Date: Sun 30 Oct 83 16:46:39-CST
From: Lauri Karttunen <Cgs.Lauri@UTEXAS-20.ARPA>
Subject: Alfred Tarski
[Reprinted from the UTexas-20 bboard.]
Alfred Tarski, the father of model-theoretic semantics, died last
Wednesday at the age of 82.
------------------------------
Date: Fri, 28 Oct 83 21:29:41 pdt
From: sokolov%Coral.CC@Berkeley
Subject: Re: talk announcements in net.ai
Ken, I would like to submit this message as a suggestion to the
AIlist readership:
This message concerns the rash of announcements of talks being given
around the country (probably the world, if we include Edinburgh). I am
one of those people that like to know what is going on elsewhere, so I
welcome the announcements. Unfortunately, my appetite is only whetted
by them. Therefore, I would like to suggest that, WHENEVER possible,
summaries of these talks should be submitted to the net. I realize
that this isn't always practical, nevertheless, I would like to
encourage people to submit these talk reviews.
Jeff Sokolov
Program in Cognitive Science
and Department of Psychology
UC Berkeley
sokolov%coral@berkeley
...!ucbvax!ucbcoral:sokolov
------------------------------
Date: 29 Oct 83 1856 PDT
From: David Lowe <DLO@SU-AI>
Subject: Representation of reasoning
I have recently written a paper that might be of considerable interest
to the people on this list. It is about a new form of structuring
interactions between many users of an interactive network, based on an
explicit representation of debate. Although this is not a typical AI
problem, it is related to much AI work on the representation of language
or reasoning (for example, the representation of a chain of reasoning in
expert systems). The representation I have chosen is based on the work
of the philosopher Stephen Toulmin. I am also sending a version of this
message to HUMAN-NETS, since one goal of the system is to create a
lasting, easily-accessed representation of the interactions which occur
on discussion lists such as HUMAN-NETS or AIList.
A copy of the paper can be accessed by FTP from SAIL (no login required).
The name of the file is PAPER[1,DLO]. You can also send me a message
(DLO @ SAIL) and I'll mail you a copy. If you send me your U.S. mail
address, I'll physically mail you a carefully typeset version. Let
me know if you are interested, and I'll keep you posted about future
developments. The following is an abstract:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
THE REPRESENTATION OF DEBATE AS A BASIS
FOR INFORMATION STORAGE AND RETRIEVAL
By David Lowe
Computer Science Department
Stanford University, Stanford, CA 94305
Abstract
Interactive computer networks offer the potential for creating a body
of information on any given topic which combines the best available
contributions from a large number of users. This paper describes a
system for cooperatively structuring and evaluating information through
well-specified interactions by many users with a common database. A
working version of the system has been implemented and examples of its use
are presented. At the heart of the system is a structured representation
for debate, in which conclusions are explicitly justified or negated by
individual items of evidence. Through debates on the accuracy of
information and on aspects of the structures themselves, a large number of
users can cooperatively rank all available items of information in terms
of significance and relevance to each topic. Individual users can then
choose the depth to which they wish to examine these structures for the
purposes at hand. The function of this debate is not to arrive at
specific conclusions, but rather to collect and order the best available
evidence on each topic. By representing the basic structure of each field
of knowledge, the system would function at one level as an information
retrieval system in which documents are indexed, evaluated and ranked in
the context of each topic of inquiry. At a deeper level, the system would
encode knowledge in the structure of the debates themselves. This use
of an interactive system for structuring information offers many further
opportunities for improving the accuracy, accessibility, currency,
conciseness, and clarity of information.
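As a toy illustration of the flavour of such a structure (the encoding
below is entirely my own guess, not Lowe's actual Toulmin-based
representation, and the item names are made up), evidence explicitly
justifying or negating conclusions, with conclusions ranked by their
net support, might look like this in Prolog:
    justifies(msg_12, nets_improve_research).      % hypothetical evidence items
    justifies(msg_17, nets_improve_research).
    negates(msg_23, nets_improve_research).
    % Net support for a conclusion: justifications minus rebuttals.
    support(Conclusion, Score) :-
            count(E, justifies(E, Conclusion), For),
            count(E, negates(E, Conclusion), Against),
            Score is For - Against.
    % count(Template, Goal, N): N is the number of solutions of Goal.
    count(Template, Goal, N) :-
            findall(Template, Goal, Solutions),
            list_length(Solutions, N).
    list_length([], 0).
    list_length([_|T], N) :- list_length(T, M), N is M + 1.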
------------------------------
Date: 28 Oct 83 19:06:50 EDT (Fri)
From: Bruce T. Smith <bts%unc@CSNet-Relay>
Subject: JSL review of GEB
The most recent issue (Vol. 48, Number 3, September
1983) of the Journal of Symbolic Logic (JSL) has an
interesting review of Hofstadter's book "Godel, Escher,
Bach: an eternal golden braid". (It's on pages 864-871, a
rather long review for the JSL. It's by Judson C. Webb, a
name unfamiliar to me, amateur that I am.)
This is a pretty favorable review-- I know better than
to start any debates over GEB-- but what I found most
interesting was its emphasis on the LOGIC in the book. Yes,
I know that's not all GEB was about, but it was unusual
to read a discussion of it from this point of view. Just to
let you know what to expect, Webb's major criticism is
Hofstadter's failure, in a book on self-reference, to dis-
cuss Kleene's fixed-point theorem,
which fuses these two phenomena so closely together.
The fixed-point theorem shows (by an adaptation of
Godel's formal diagonalization) that the strangest ima-
ginable conditions on functions have solutions computed
by self-referential machines making essential use of
their own Godel-numbers, provided only that the condi-
tions are expressible by partial recursive functions.
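The theorem in question is usually stated as follows (my gloss, not the
review's wording): for every total recursive function f there is an
index n such that
    \varphi_n \simeq \varphi_{f(n)}
that is, the program with Godel-number n computes the same partial
function as the program with Godel-number f(n), and such an n can be
found effectively from a program for f.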
He also points out that Hofstadter didn't show quite how
shocking Godel's theorems were: "In short, Godel discovered
the experimental completeness of a system that seemed almost
too weak to bother with, and the theoretical incompleteness
of one that aimed only at experimental completeness."
Enough. I'm not going to type the whole 7.5 pages. Go
look for the newest issue of the JSL-- probably in your
Mathematics library. For any students out there, membership
in the Association for Symbolic Logic is only $9.00/yr and
includes the JSL. Last year they published around 1000
pages. It's mostly short technical papers, but they claim
they're going to do more expository stuff. The address to
write to is
The Association for Symbolic Logic
P.O.Box 6248
Providence, RI 02940
============================================
Bruce Smith, UNC-Chapel Hill
...!decvax!duke!unc!bts (USENET)
bts.unc@CSnet-Relay (from other NETworks)
------------------------------
Date: 27 October 1983 1130-EDT
From: Hans Berliner at CMU-CS-A
Subject: ACM chess results
[Reprinted from the CMU-C bboard.]
The results of the ACM World Computer Chess Championship are:
CRAY BLITZ - 4 1/2 1st place
BEBE - 4 2nd
AWIT - 4 3rd
NUCHESS - 3 1/2 4th
CHAOS - 3 1/2 5th
BELLE - 3 6th
There were lots of others with 3 points. Patsoc finished with a
score of 1.5 - 3.5. It did not play any micros and was usually
outgunned by 10 mip mainframes. There was a lot of excitement in the
last 3 rounds. In round 3 NUCHESS defeated Belle (the first time
Belle had lost to a machine). In round 4 Nuchess drew Cray Blitz in
a long struggle when they were both tied for the lead and remained so
at 3 1/2 points after this round. The final round was really wild:
BEBE upset NUCHESS (the first time it had ever beaten Nuchess) just
when NUCHESS looked to have a lock on the tournament. CRAY Blitz won
from Belle when the latter rejected a draw because it had been set to
play for a win at all costs (Belle's only chance, but this setting
was a mistake as CRAY BLITZ also had to win at all costs). In the
end AWIT snuck into 3rd place in all this commotion, without having
ever played any of the contenders. One problem with a Swiss pairing
system used for tournaments where only a few rounds are possible is
that it only brings out a winner. The other scores are very much
dependent on what happens in the last round.
Belle was using a new modification of its search technique which, based on
the results, could be thought of as a mistake. Probably it is not,
though possibly the implementation was not the best. In any
case Thompson apparently thought he had to do something to improve
Belle for the tournament.
In any case, it was not a lost cause for Thompson. He shared this
year's Turing Award with Ritchie for developing UNIX, received a
certificate from the US chess federation for the first non-human
chess master (for Belle), and a $16,000 award from the Common Wealth
foundation for the invention award of the year (software) for his
work on UNIX, C, and Belle. Lastly, it is interesting to note that
this is the 4th world championship. They are held 3 years apart, and
no program has won more than one of them.
------------------------------
Date: Mon, 17 Oct 83 10:41:19 CDT
From: wagner@compion-vms
Subject: Announcement: VERUS verification system offered
Use of the VERUS Verification System Offered
--------------------------------------------
VERUS is a software design specification and verification system
produced by Compion Corporation, Urbana, Illinois. VERUS was designed
for speed and ease of use. The VERUS language is an extension of
the first-order predicate calculus designed for a software
engineering environment. VERUS includes a parser and a theorem prover.
Compion now offers use of VERUS over the MILNET/ARPANET. Use is for a
maximum of 4 weeks. Each user is provided with:
1. A unique sign-on to Compion's VAX 11/750 running VMS
2. A working directory
3. Hard-copy user manuals for the use period.
If you are interested, contact Fran Wagner (wagner@compion-vms).
Note that the new numerical address for compion-vms is 10.2.0.55.
Please send the following information to help us prepare for you
to use VERUS:
your name
organization
U.S. mailing address
telephone number
network address
whether you are on the MILNET or the ARPANET
whether you are familiar with VMS
whether you have a DEC-supported terminal
desired starting date and length of use
We will notify you when you can log on and send you hard-copy user
documents including a language manual, a user's guide, and a guide
to writing state machine specifications.
After the network split, VERUS will be available over the MILNET
and, by special arrangement, over the ARPANET.
←←←←←←←←←←
VERUS is a trademark of Compion Corporation.
DEC, VAX, and VMS are trademarks of Digital Equipment Corporation.
------------------------------
Date: 26 Oct 1983 19:34:39-EDT
From: mac%mit-vax @ MIT-MC
Subject: FGCS Call for Papers
CALL FOR PAPERS
FGCS '84
International Conference on Fifth Generation Computer Systems, 1984
Institute for New Generation Computer Technology
November 6-9, 1984 Tokyo, Japan
The scope of technical sessions of this conference encompasses
the technical aspects of new generation computer systems which
are being explored particularly within the framework of logic
programming and novel architectures. This conference is intended
to promote interaction among researchers in all disciplines re-
lated to fifth generation computer technology. The topics of in-
terest include (but are not limited to) the following:
PROGRAM AREAS
Foundations for Logic Programs
* Formal semantics/pragmatics
* Computation models
* Program analysis and complexity
* Philosophical aspects
* Psychological aspects
Logic Programming Languages/Methodologies
* Parallel/Object-oriented programming languages
* Meta-level inferences/control
* Intelligent programming environments
* Program synthesis/understanding
* Program transformation/verification
Architectures for New Generation Computing
* Inference machines
* Knowledge base machines
* Parallel processing architectures
* VLSI architectures
* Novel human-machine interfaces
Applications of New Generation Computing
* Knowledge representation/acquisition
* Expert systems
* Natural language understanding/machine translation
* Graphics/vision
* Games/simulation
Impacts of New Generation Computing
* Social/cultural
* Educational
* Economic
* Industrial
* International
ORGANIZATION OF THE CONFERENCE
Conference Chairman : Tohru Moto-oka, Univ of Tokyo
Conference Vice-chairman : Kazuhiro Fuchi, ICOT
Program Chairman : Hideo Aiso, Keio Univ
Publicity Chairman : Kinko Yamamoto, JIPDEC
Secretariat : FGCS'84 Secretariat, Institute for New
Generation Computer Technology (ICOT)
Mita Kokusai Bldg. 21F
1-4-28 Mita, Minato-ku, Tokyo 108, Japan
Phone: 03-456-3195 Telex: 32964 ICOT
PAPER SUBMISSION REQUIREMENTS
Four copies of manuscripts should be submitted by April 15, 1984 to :
Prof. Hideo Aiso
Program chairman
ICOT
Mita Kokusai Bldg. 21F
1-4-28 Mita, Minato-ku
Tokyo 108, Japan
Papers are restricted to 20 double-spaced pages (about 5000
words) including figures. Each paper must contain a 200-250 word
abstract. Papers must be written and presented in English.
Papers will be reviewed by international referees. Authors will
be notified of acceptance by June 30, 1984, and will be given in-
structions for final preparation of their papers at that time.
Camera-ready papers for the proceedings should be sent to the
Program Chairman prior to August 31, 1984.
Intending authors are requested to return the attached reply card
with tentative subjects.
GENERAL INFORMATION
Date : November 6-9, 1984
Venue : Keio Plaza Hotel, Tokyo, Japan
Host : Institute for New Generation Computer Technology
Outline of the Conference Program :
General Sessions
Keynote speeches
Report of research activities on Japan's FGCS Project
Panel discussions
Technical sessions (Parallel sessions)
Presentation by invited speakers
Presentation of submitted papers
Special events
Demonstration of current research results
Technical visit
Official languages :
English/Japanese
Participants: 600
Further information:
Conference information will be available in December, 1983.
**** FGCS PROJECT ****
The Fifth Generation Computer Systems (FGCS) Project, launched in
April, 1982, is planned to span about ten years. It aims at
realizing more user-friendly and intelligent computer systems
which incorporate inference and knowledge base management func-
tions based on innovative computer architecture, and at contri-
buting thereby to future society. The Institute for New Genera-
tion Computer Technology (ICOT) was established as the central
research institute of the project. The ICOT Research Center be-
gan its research activities in June, 1982 with the support of
government, academia and industry.
------------------------------
End of AIList Digest
********************
∂31-Oct-83 1959 GOLUB@SU-SCORE.ARPA meeting
Received: from SU-SCORE by SU-AI with TCP/SMTP; 31 Oct 83 19:59:23 PST
Date: Mon 31 Oct 83 19:58:41-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: meeting
To: CSD-Senior-Faculty: ;
cc: bscott@SU-SCORE.ARPA
There's lots to discuss on Tuesday and I need to leave at 3:45.
Please be prompt. GENE
-------
∂31-Oct-83 2346 BRODER@SU-SCORE.ARPA AFLB reminder
Received: from SU-SCORE by SU-AI with TCP/SMTP; 31 Oct 83 23:46:41 PST
Date: Mon 31 Oct 83 23:45:46-PST
From: Andrei Broder <Broder@SU-SCORE.ARPA>
Subject: AFLB reminder
To: aflb.su@SU-SCORE.ARPA
Stanford-Office: MJH 325, Tel. (415) 497-1787
Don't forget tomorrow's special AFLB meeting in MJH252, at 12:30:
11/1/83 - Prof. Pavol Hell (Simon Fraser University)
"Sorting in Rounds"
-------
∂01-Nov-83 0233 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #44
Received: from SU-SCORE by SU-AI with TCP/SMTP; 1 Nov 83 02:33:05 PST
Date: Monday, October 31, 1983 8:52PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #44
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Tuesday, 1 Nov 1983 Volume 1 : Issue 44
Today's Topics:
Implementations - User Convenience Vs. Elegance
----------------------------------------------------------------------
Date: Sunday, 30-Oct-83 22:59:17-GMT
From: Richard HPS (on ERCC DEC-10) <OKeefe.R.A.@EDXA>
Subject: Reply To My Critics
Russ Abbott raised the question of how pure =.. (UNIV) is,
claiming that it is "in violation of all first order principles".
Not so. As David Warren pointed out in his paper
@InProceedings<HigherOrder,
Key = "Warren",
Author = "Warren, D.H.D.",
Title = "Higher-order extensions to Prolog - are they needed ?",
BookTitle = "Tenth International Machine Intelligence Workshop,
Cleveland, Ohio",
Year = 1981,
Month = "Apr",
Note = "DAI Research Paper 154"
>
suppose we have a logic program P containing predicates
<Pi/Ni; i=1..p> and functions <Fj/Nj; j=1..f>, where each
predicate is also listed among the functions. Then we can form
a new logic program P' by adding the axioms
call(Pi(X1,...,XNi)) :- Pi(X1,...,XNi). % i = 1..p
functor(K, K, 0) :- integer(K).
functor(Fj(X1,...,XNj), Fj, Nj). % j = 1..f
arg(k, Fj(X1,...,Xk,...,XNj), Xk). % j = 1..f
% k = 1..Nj
K =.. [K] :- integer(K).
Fj(X1,...,XNj) =.. [Fj,X1,...,XNj]. % j = 1..f
If the Prolog interpreter finds a solution for Goal using the
program P, Goal is a logical consequence of the program P', even
in the presence of cuts, but not of course with assert and retract.
So call, functor, arg, and univ *are* first-order, in a suitably
augmented program. It is perfectly proper to call them first order,
because the augment is just definitions for those predicates, no
other parts of the program being changed. (It is NOT the case that
Prolog working on P will find all the answers that Prolog working
on P' would find. For example, the built-in arg does not backtrack.
But then it isn't the case that Prolog will find all the logical
consequences of a P that doesn't use these predicates, so that is
nothing new.)
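To make the schema concrete, here is its instantiation (my own, for
illustration only) for a tiny program whose only predicate is p/2 and
whose other function symbols are the constant a and f/1; in a real
Prolog these clauses merely describe what the built-in predicates
already do, and would not actually be consulted:
    call(p(X1, X2)) :- p(X1, X2).
    functor(K, K, 0) :- integer(K).
    functor(a, a, 0).
    functor(f(X1), f, 1).
    functor(p(X1, X2), p, 2).
    arg(1, f(X1), X1).
    arg(1, p(X1, X2), X1).
    arg(2, p(X1, X2), X2).
    K =.. [K] :- integer(K).
    a =.. [a].
    f(X1) =.. [f, X1].
    p(X1, X2) =.. [p, X1, X2].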
You might even accept ancestors(_) and subgoal_of(_) as first-
order. In the transformed program each predicate has an extra
argument: the list of ancestor goals. Although the transformed
program is different from the original, the important things are
that the transformation can be accomplished entirely at compile
time and that the result is a pure first-order program which is
very like the original.
Chris Moss criticises "Richard's assumption that the Edinburgh
implementation is normative for such things as setof". I make no
such assumption. The version of setof and bagof I mailed to this
Digest is not in fact "the Edinburgh implementation". My view
is the perfectly commonplace and widely accepted one that
the first person to invent or discover something
has the right to name it, and
other people knowing this name do not have the right
to apply it to other things or to use another name
for the same thing without consulting the relevant
professional community.
If enough of the Prolog community agree that setof or bagof
is a bad name (and since Chris Moss himself has proposed an
operation called seqof I assume he likes the name) I will go along
with whatever is chosen to replace it. Till then:
1. The definition of setof is available in 5 places:
MI 10, DAI RR 154, the Dec-10 Prolog manual, the C-Prolog
manual, and this Digest.
2. David Warren has priority on the names setof and bagof.
He also has priority on the operation (the one where free
variables in the generator can be bound).
3. The name "findall" for what Chris Moss calls "a flat setof"
is established on page 152 of "Programming in Prolog",
which is the most widely available text on Prolog. I think
Chris Mellish invented the name.
Chris Moss also asks 'which implementation introduced the
word "find←all"? I don't know.' The answer is two-fold. The
first part of the answer is that the proper (according to the
normal practice of scientific and mathematical naming) name
for the operation in question is 'findall', and I occasionally
mis-spell it. The second part of the answer is that when I
mailed an implementation of 'findall' to net.lang.prolog I
called it 'find_all' so that people could try it out whose
Prolog implementation reserved the word 'findall'. This has
reinforced my mis-spelling habit.
The following paragraph in his message is one I whole
heartedly agree with, except to point out that the reasonably
clear descriptions in the 1981 DEC-10 Prolog reference manual
seem to have done no good, so there is little reason to expect
more formal definitions to help. Alan Mycroft and someone else
(Neil Jones?) have published a formal semantics for Prolog with
the cut but without i/o or database hacking.
Yoav Shoham is thankful for assert and retract. Yes, and
BASIC programmers should be thankful for GOSUB. But Pascal
programmers are even more thankful for real procedures which
are even better suited to the task of writing correct readable
programs. I cannot read the example, but the data base seems
to be used for two different tasks. In 'next'/2 we find
asserta(tmpstoreglist((L1,X2,Fnextfn))),
<<<DANGER>>>
retract(tmpstoreglist(Copy)),
Now all that this does is to bind Copy to a copy of (L1,X2,Fnextfn)
with new variables. The "meta-logical utilities" available as
{SU-SCORE}PS:<Prolog>METUTL.PL contain a predicate
copy(OldVersion, NewVersion) :-
asserta(.(OldVersion)),
retract(.(NewVersion)).
or something like that. Now there are three points to make.
The first is that this is a perfectly clear operation which
in itself has nothing whatsoever to do with the data base.
It could be implemented very easily in C or MACRO-10. The
second point is that it is not at all difficult to implement
it in Prolog, using the var/1 and ==/2 primitives. The third
point is that by not giving the operation a name, and writing
the asserta and retract separately, Shoham has been seduced
into putting some other code in between the parts of this
operation, where I put <<<DANGER>>> above. In the case that
Fnextfn fails, a tmpstoreglist clause will be left in the data
base, which I assume was not intended.
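To make the second point concrete, here is a sketch of my own (not the
METUTL.PL code) of that copying operation written without the data
base, using only var/1, ==/2, functor/3 and arg/3; the map of
old-variable/new-variable pairs keeps shared variables shared in the
copy:
    copy(Old, New) :-
            copy(Old, New, [], _).
    copy(Old, New, Map, Map) :-                     % variable seen before
            var(Old), seen(Map, Old, New), !.
    copy(Old, New, Map, [Old-New|Map]) :-           % new variable
            var(Old), !.
    copy(Old, New, Map0, Map) :-                    % non-variable term
            functor(Old, F, N),
            functor(New, F, N),
            copy_args(N, Old, New, Map0, Map).
    copy_args(0, _, _, Map, Map) :- !.
    copy_args(N, Old, New, Map0, Map) :-
            arg(N, Old, OldArg),
            arg(N, New, NewArg),
            copy(OldArg, NewArg, Map0, Map1),
            M is N-1,
            copy_args(M, Old, New, Map1, Map).
    seen([Var-New|_], Old, New) :- Var == Old, !.
    seen([_|Map], Old, New) :- seen(Map, Old, New).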
So this is for me a PERFECT example of the UNdesirability
of assert and retract: there is a straightforward operation which
is easy to explain (NewVersion is a renaming of OldVersion where
all the variables are new) which could have been provided by the
implementor more efficiently than by using the data base, and
where using assert and retract has led to a program which is
harder to understand and more error-prone. Thank you very much
for this example, Yoav Shoham.
The other use that example makes of assert and retract is to
maintain a global variable in the data base. If that's what you
want, fine. But if you want lazy lists to look like Prolog objects
which you can pass around, share variables with, etc. just like
ordinary lists, then sticking them in the data base is the last
thing you want. Yoav Shoham challenges "Does anyone have a pure
solution that is correct and efficient ?" Since no specification
is presented, I have no idea what "correct" means. I can't take
the specification from the code, because it puts the answer in the
data base, which a pure solution can't do. So here is my first
attempt at lazy lists in Prolog, which I have tested, but only
on two examples.
% File : LAZY.PL
% Author : R.A.O'Keefe
% Updated: 30 October 1983
% Purpose: Lazy lists in Prolog.
% Needs : apply/2 from APPLIC.PL.
% Note: this code is "pure" only in the sense that it has no
% side-effects. It does rely on 'nonvar' and cuts.
% The lists are a little bit too eager to really be called lazy, as
% if you look at N elements N+1 will be computed instead of N.
% If you backtrack, the computed elements will be undone just like
% other Prolog data structures. "Intelligent backtracking" might
% be a good thing if lazy lists were to be used a lot.
/*
:- type
        lazy_list(T) --> list(T)/void(T,T).
:- pred
        make_lazy(T, void(T,T), lazy_list(T)),
        head_lazy(lazy_list(T), T),
        tail_lazy(lazy_list(T), lazy_list(T)),
        member_check_lazy(T, lazy_list(T)).
*/
:- public
        make_lazy/3,
        head_lazy/2,
        tail_lazy/2,
        member_check_lazy/2.
:- mode
        make_lazy(+, +, -),
        head_lazy(+, ?),
        tail_lazy(+, -),
        member_check_lazy(+, +).
% A lazy list is a pair consisting of a normal Prolog list (usually
% ending with an unbound variable) and a goal which may be used to
% generate new elements. The idea is that [X0,X1,X2,...]/R should
% satisfy X0 R X1, X1 R X2, ... These objects should only be used
% as arguments to these predicates.
make_lazy(First, Step, [First|_]/Step).
head_lazy([Head|_]/_, Head).
tail_lazy([_|Tail]/Step, Tail/Step) :-
        nonvar(Tail), !.        % delete this clause to get logic
tail_lazy([Head|Tail]/Step, Tail/Step) :-
        apply(Step, [Head,Next]),
        Tail = [Next|_].
member_check_lazy(Thing, LazyList) :-
        head_lazy(LazyList, Thing), !.
member_check_lazy(Thing, LazyList) :-
        tail_lazy(LazyList, LazyTail),
        member_check_lazy(Thing, LazyTail).
end_of_file.
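A small usage sketch of my own (succ/2 below is a made-up step
predicate; apply/2 is the one from APPLIC.PL, so apply(succ, [X,Y])
just calls succ(X, Y)):
    succ(X, Y) :- Y is X + 1.
    % ?- make_lazy(1, succ, L), member_check_lazy(4, L).
    % succeeds, forcing the elements 2, 3 and 4 to be computed on the way.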
Wilkins says "LISP without these [rplaca and tconc] is an
adequate programming language for a novice and still quite
challenging." Just so. I'd drop "for a novice". Maybe my
example of rplaca and tconc was ill-chosen (though the student
in question had been writing good Lisp for about a year).
But the claim that "Nearly all programming in a given language
is done by people who are not novices" is true only in the sense
that those people have experience. It is not always true that
they have understanding. I have seen a lot of programs written
by experienced people (in Algol, Fortran, PL/I, Pascal, C, and
Lisp) that I would have sworn were written by people who'd never
looked at the manual. I have never met another Fortran programmer
who had bothered to read the standard (I know they exist, I'm just
saying they're rare).
As Prolog stands *now*, there is a definite need for assert
and retract. To deny this would be to condemn myself, as my
programs do from time to time hack the data base. The trouble
is that people who are accustomed to other languages feel lost
without assignment and pounce on assert and retract with loud
cries of joy, **far too early**. Too early when they are learning.
I have seen someone hacking the data base like this:
:- assert(p(X=47+Y)).
?- Y = 12, p(Eqn), write(Eqn), nl.
and being very surprised when he got "_1 = 47 + _2". He was
expecting "X = 47 + 12". He had not yet understood that clauses
don't share variables. (Before criticising the instructor: he
had none.) Too early when writing a program. You should always
look for an applicative solution FIRST. Second too. When an
applicative solution exists, it is usually faster than a data
base solution.
Here is an example. Suppose you want to maintain a table
which maps the integers 1..N to elements (for N known when the table is created),
and want to be able to change elements of it. Most people seem
to go for
make_table(N, Init, T) :-
        gensym(table, T),
        forall(between(1, N, K), assert(table(T, K, Init))).
fetch(T, N, Elem) :-
        table(T, N, Elem).
store(T, N, Elem) :-
        retract(table(T, N, _)),        % bounds check
        assertz(table(T, N, Elem)).
first thing. But in most existing Prologs, the cost of a fetch or
store is O(N+|Elem|), with a large constant. If you use binary trees
instead, the cost of a fetch or store is O(lg N) with a small constant
(though you do need a garbage collector). There will of course be
occasions when you need to use the data base, but you will use it more
effectively if you have considered a variety of data structures first.
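To make the binary-tree alternative concrete, here is a rough sketch of
my own (the leaf/node representation and the bit-by-bit indexing are my
choices, not anything from the Digest); note that store/4 returns a new
tree instead of hacking the data base:
    make_table(N, _, leaf) :- N =< 0, !.
    make_table(N, Init, node(Init, Left, Right)) :-
            NL is N >> 1,                   % even indices live in Left
            NR is (N - 1) >> 1,             % odd indices > 1 live in Right
            make_table(NL, Init, Left),
            make_table(NR, Init, Right).
    fetch(node(Elem, _, _), 1, Elem) :- !.
    fetch(node(_, Left, _), K, Elem) :-
            K > 1, K /\ 1 =:= 0, !,
            K1 is K >> 1,
            fetch(Left, K1, Elem).
    fetch(node(_, _, Right), K, Elem) :-
            K > 1,
            K1 is K >> 1,
            fetch(Right, K1, Elem).
    store(node(_, Left, Right), 1, Elem, node(Elem, Left, Right)) :- !.
    store(node(E, Left0, Right), K, Elem, node(E, Left, Right)) :-
            K > 1, K /\ 1 =:= 0, !,
            K1 is K >> 1,
            store(Left0, K1, Elem, Left).
    store(node(E, Left, Right0), K, Elem, node(E, Left, Right)) :-
            K > 1,
            K1 is K >> 1,
            store(Right0, K1, Elem, Right).
Both fetch and store simply fail on an out-of-range index, which plays
the role of the bounds check above.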
With regard to assert/a/z, retract, recorda/z, instance, erase,
recorded, all I am saying is "please give me instead operations that I
and compiler writers can understand". I have seen no evidence that
anyone except David Warren, Fernando Pereira, Lawrence Byrd, and Chris
Mellish understands data base hacking in DEC-10 Prolog any better than
I do, which after four years is pretty well but not well enough. I
have found a lot of people who THINK they understand data base hacking
but whose code shows they are wrong, and quite a few people who are
sure they don't understand it and so stick to simple cases and whose
programs in consequence work. What I want to see is a collection of
these simple cases given names and definitions and coded efficiently.
The end of the quest will still be a form of data base hacking. I'm
not all that bothered about assignment, or stepping outside logic.
What I DO object to is being forced to use primitives I don't
understand fully.
Fiat ratiocinatio, fiat ratio autem. [Roughly: let there be calculation, but let there be reason as well.]
End of PROLOG Digest
********************
∂01-Nov-83 1339 BRODER@SU-SCORE.ARPA Next AFLB talk(s)
Received: from SU-SCORE by SU-AI with TCP/SMTP; 1 Nov 83 13:39:32 PST
Date: Tue 1 Nov 83 13:36:17-PST
From: Andrei Broder <Broder@SU-SCORE.ARPA>
Subject: Next AFLB talk(s)
To: aflb.all@SU-SCORE.ARPA
cc: sharon@SU-SCORE.ARPA
Stanford-Office: MJH 325, Tel. (415) 497-1787
N E X T A F L B T A L K (S)
Some AFLB people are very much against right justified abstracts.
Some prefer them. If you feel very strongly one way or the other let
me know, and I'll abide by the majority opinion.
No, definitely there will not be two mailing lists!
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
11/3/83 - Prof. J. M. Robson (Australian National University)
"The Complexity of GO and Other Games"
For GO as played in Japan, as for chess and checkers, deciding whether
White can force a win from a given position is an exponential time
complete problem. The Chinese rules of GO differ from the Japanese in a
manner which appears minor but invalidates both the upper and the
lower bound parts of the Exptime completeness proof. Making a similar
change to other games results in their decision problem becoming
exponential time complete.
******** Time and place: Nov. 3, 12:30 pm in MJ352 (Bldg. 460) *******
11/10/83 - Prof. Alan L. Selman (Iowa State Univ.)
"From Complexity to Cryptography and Back"
What is a secure public-key cryptosystem? Do there exist secure
public-key cryptosystems? Answers to these questions depend on deeper
knowledge about the mathematical structure of NP than one might
anticipate -- and a deeper knowledge than is now available. Some
interesting theorems are proved along the way.
******** Time and place: Nov. 10, 12:30 pm in MJ352 (Bldg. 460) *******
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Regular AFLB meetings are on Thursdays, at 12:30pm, in MJ352 (Bldg.
460).
If you have a topic you would like to talk about in the AFLB seminar
please tell me. (Electronic mail: broder@su-score.arpa, Office: CSD,
Margaret Jacks Hall 325, (415) 497-1787) Contributions are wanted and
welcome. Not all time slots for the autumn quarter have been filled
so far.
For more information about future AFLB meetings and topics you might
want to look at the file [SCORE]<broder>aflb.bboard .
- Andrei Broder
-------
∂01-Nov-83 1415 GOLUB@SU-SCORE.ARPA lunches
Received: from SU-SCORE by SU-AI with TCP/SMTP; 1 Nov 83 14:15:46 PST
Date: Tue 1 Nov 83 14:14:34-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: lunches
To: faculty@SU-SCORE.ARPA
Today's lunch was so pleasant and interesting that it re-affirmed my
desire to maintain the lunches. I hope we can continue in this spirit.
GENE
-------
∂01-Nov-83 1449 RPERRAULT@SRI-AI.ARPA Winskel lectures
Received: from SRI-AI by SU-AI with TCP/SMTP; 1 Nov 83 14:49:23 PST
Date: Tue 1 Nov 83 14:46:44-PST
From: Ray Perrault <RPERRAULT@SRI-AI.ARPA>
Subject: Winskel lectures
To: csli-friends@SRI-AI.ARPA
cc: rperrault@SRI-AI.ARPA, bboard@SU-SCORE.ARPA, bboard@SRI-AI.ARPA
SPECIAL CSLI LECTURE SERIES BY GLYNN WINSKEL
CSLI announces a special series of lectures by Glynn Winskel,
of the CMU Computer Sciences Department, who will be visiting CSLI
from November 3 through 11. During his stay, he will be using room 27
in Ventura Hall (497-1710). His lectures will be as follows:
1. The CSLI Colloquium, 4:15 p.m., Thursday, November 3, Redwood Hall
"The Semantics of a Simple Programming Language"
The operational and denotational semantics of a simple
programming language are presented and used to illustrate
some basic issues in the semantics of programming languages.
I will try to show how the more abstract concepts of denotational
semantics connect with more basic operational ideas. Specifically,
I will define what it means for the semantics to be equivalent
and indicate briefly how to prove them equivalent. I'll explain
what it means for a denotational semantics to be fully abstract
with respect to an operational semantics. Full abstraction
is a useful criterion for the agreement of denotational and
operational semantics; it has been used particularly in murky
areas like the semantics of concurrency where at present there
is no generally accepted model. I'll motivate the basic concepts
of denotational semantics like complete partial orders (cpo's)
and the continuous functions on them.
2. Working Group in Semantics of Computer Languages,
9:30 a.m., Tuesday, November 8, at Xerox PARC.
Come to lower entrance at 9:25.
"The Semantics of Nondeterminism"
The programming language of the first talk will be extended
to admit nondeterminism. Starting from an operational semantics
there will be three natural equivalence relations between programs
based on their possible and inevitable behaviour. Accordingly
when we move over to the denotational semantics there will be
three different power domains with which to give the semantics.
(Power domains are the cpo analogue of powerset and they capture
information about nondeterministic behaviour of a computation,
roughly the set of values it produces.) With the intuitions
secure (hopefully), we'll turn to a more abstract treatment of
power domains and show how they are used to give denotational
semantics to parallelism. In this talk both the operational
and denotational semantics will use the nondeterministic
interleaving (shuffling) of atomic actions to handle parallelism.
3. Approaches to Computer Languages Seminar, 2 p.m., Thursday,
November 10, Redwood Hall.
"The Semantics of Communicating Processes"
This talk is intended as an introduction to the work of Milner
and associates in Edinburgh and Hoare and associates in Oxford
on programming languages and semantics for communicating processes.
Milner's language Calculus of Communicating Systems (CCS) and
Hoare's Communicating Sequential Processes (CSP) are fairly similar.
Both are based on synchronized communication between processes.
4. Special meeting of C1 group, 3:30 p.m., Friday, November 11,
at SRI, conference room EL369. Visitors should come to the
entrance of Building E at 3:25 p.m.
"Event Structure Semantics of Communicating Processes"
An event structure consists of a set of events related by
causality relations specifying how an event depends for its
occurrence on the previous occurrence of events and how the
occurrence of some events excludes others. Here we focus on
their use to give a semantics to languages like CCS and CSP.
Event structures capture concurrency as causal independency
and so give a noninterleaving model of concurrent (or parallel)
computations. Adopting a notion of morphism appropriate to
synchronizing processes we obtain a category of event structures
with categorical constructions closely related to those
constructions used by Milner and Hoare. We show how relations
between event structures and other models like Petri nets and
some of the interleaving models of Milner and Hoare, are
expressed as adjunctions.
-------
∂01-Nov-83 1615 LIBRARY@SU-SCORE.ARPA Integration the VLSI Journal--recommendations?
Received: from SU-SCORE by SU-AI with TCP/SMTP; 1 Nov 83 16:14:56 PST
Date: Tue 1 Nov 83 16:13:21-PST
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Integration the VLSI Journal--recommendations?
To: su-bboards@SU-SCORE.ARPA
cc: faculty@SU-SCORE.ARPA
Integration, the VLSI Journal, is a new title from North-Holland; it is a
quarterly costing $88 a year. The editorial board includes Shrobe (MIT),
Bryant (CalTech), Burstein (IBM Watson), etc. Volume 1, no. 1 (April 1983) includes
the following articles: VLSI Physics by Svensson; Hierarchical Channel
Router by Burstein and Pelavin; A very fast multiplication algorithm
for VLSI implementation by Vuillemin; Lambda, an integrated master-slice
LSI CAD system by Goto, Matsuda, Takamizaa, Fujita, Mixumura, and Nakamura;
A function-independent self-test for large programmable logic arrays by
Grassi and Pfleiderer; and Logic gate characterization through ring oscillators
by Wassink and Spaanenburg.
I will place this issue with the current journals for your review. If you
think I should purchase it, let me know.
Harry
-------
∂01-Nov-83 1649 LAWS@SRI-AI.ARPA AIList Digest V1 #87
Received: from SRI-AI by SU-AI with TCP/SMTP; 1 Nov 83 16:48:28 PST
Date: Tuesday, November 1, 1983 9:47AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #87
To: AIList@SRI-AI
AIList Digest Tuesday, 1 Nov 1983 Volume 1 : Issue 87
Today's Topics:
Rational Psychology - Definition,
Parallel Systems,
Consciousness & Intelligence,
Halting Problem,
Molecular Computers
----------------------------------------------------------------------
Date: 29 Oct 83 23:57:36-PDT (Sat)
From: hplabs!hao!csu-cs!denelcor!neal @ Ucb-Vax
Subject: Re: Rational Psychology
Article-I.D.: denelcor.182
I see what you are saying and I beg to disagree. I don't believe that
the distinction between rational and irrational psychology (it's probably
not that simple) depends on whether or not the scientist is being
rational, but rather on whether or not the subject is (or rather on which
aspect of his behavior--or mentation, if you accept the existence of
that--is under consideration). It is more like the distinction between
organic and inorganic chemistry.
------------------------------
Date: Mon, 31 Oct 83 10:16:00 PST
From: Philip Kahn <kahn@UCLA-CS>
Subject: Sequential vs. parallel
It was claimed that "parallel computation can always
be done sequentially." I had thought that this naive concept had passed
away into never never land, but I suppose not. I do not deny that MANY
parallel computations can be accomplished sequentially, yet not ALL
parallel computations can be made sequential. The class of parallel
computations that cannot be accomplished sequentially are those that
involve the state of all variables in a single instant. This class
of parallelism often arises in sensor applications. It would not be
valid, for example, to raster-scan (sequential computation) a sensing field
if the processing of that sensing field relied upon the quantization of
elements in a single instant.
I don't want to belabor this point, but it should be recognized
that the common assertion that all parallel computation can be done
sequentially is NOT ALWAYS VALID. In my own experience, I have found
that artificial intelligence (and real biologic intelligence for that
matter) relies heavily upon comparisons of various elements at a single
time instant. As such, the assumption of sequentiality of parallelistic
algorithms is often invalid. Something to think about.
------------------------------
Date: Saturday, 29 Oct 1983 21:05-PST
From: sdcrdcf!trw-unix!scgvaxd!qsi03!achut@rand-relay
Subject: Consciousness, Halting Problem, Intelligence
I am new to this mailing list and I see there is some lively
discussion going on. I am eager to contribute to it.
Consciousness:
I treat the words self-awareness, consciousness, and soul as
synonyms in the context of these discussions. They are all epiphenomena
of the phenomenon of intelligence, along with emotions, desires, etc.
To say that machines can never be truly intelligent because they cannot
have a "soul" is to be excessively naive and anthropocentric. Self-
awareness is not a necessary prerequisite for intelligence; it arises
naturally *because* of intelligence. All intelligent beings possess some
degree of self-awareness; to perceive and interact with the world, there
must be an internal model, and this invariably involves taking into
account the "self". A very, very low intelligence, like that of a plant,
will possess a very, very low self-awareness.
Parallelism:
The human brain resembles a parallel machine more than it does a
purely sequential one. Parallel machines can do many things much quicker
than their sequential counterpart. Parallel hardware may well make the
difference between the attainment of AI in the near future and the
unattainment for several decades. But I cannot understand those who claim
that there is something *fundamentally* different between the two types of
architectures. I am always amazed at the extremes to which some people will
go to find the "magic spark" which separates intelligence from non-
intelligence. Two of these are "continuousness vs. discreteness" and
"non-determinism vs. determinism".
Continuous? Nothing in the universe is continuous. (Except maybe
arguments to the contrary :-)) Mass, energy, space and even time, at least
according to current physical knowledge, are all quantized. Non-determinism?
Many people feel that "randomness" is a necessary ingredient to intelligence.
But why isn't this possible with a sequential architecture? I can
construct a "discrete" random number generator for my sequential machine
so that it behaves in a similar manner to your "non-deterministic" parallel
machine, although perhaps slower. (See "Slow intelligence" below)
Perhaps the "magic sparkers" should consider that difference they are
searching for is merely one of complexity. (I really hate to use the
word "merely", since I appreciate the vast scope of the complexity, but
it seems appropriate here) There is no evidence, currently, to justify
thinking otherwise.
The Halting(?) Problem:
What Stan referred to as the "Halting Problem" is really
the "looping problem", hence the subsequent confusion. The Halting Problem
is not really relevant to AI, but the looping problem *is* relevant. The
question is not even "why don't humans get caught in loops", since, as
Mr. Frederking aptly points out, "beings which aren't careful about this
fail to breed, and are weeded out by evolution". (For an interesting story
of what could happen if this were not the case, see "Riddle of the universe
and its solution" by Christopher Cherniak in "The Mind's I".) But rather, the
more interesting question is "by what mechanisms do humans avoid them?",
and then, "are these the best mechanisms to use in AI programs?".
It is not clear that this might not be a problem when AI is attempted on a
machine whose internal states could conceivably recur. Now I am not saying
that this an insurmountable problem by any means; I am merely saying that
it might be a worthy topic of discussion.
Slow intelligence:
Intelligence is dependent on time? This would require a curious
definition of intelligence. Suppose you played chess at strength 2000 given
5 seconds per move, 2010 given 5 minutes, and 2050 given as much time as you
desired. Suppose the corresponding numbers for me were 1500, 2000, and 2500.
Who is the better (more intelligent) player? True, I need 5 minutes per
move just to play as well as you can at only 5 seconds. But shouldn't the
"high end" be compared instead? There are many bases on which to decide the
"greater" of two intelligences. One is (conceivably, but not exclusively)
speed. Another is number and power of inferences it can make in a given
situation. Another is memory, and ability to correlate current situations
with previous ones. STRAZ@MIT-OZ has the right idea. Incidentally, I'm
surprised that no one pointed out an example of an intelligence staring
us in the face which is slower but smarter than us all, individually.
Namely, this net!
------------------------------
Date: 25 Oct 83 13:34:02-PDT (Tue)
From: harpo!eagle!mhuxl!ulysses!cbosgd!cbscd5!pmd @ Ucb-Vax
Subject: Artificial Consciousness? [and Reply]
I'm interested in getting some feedback on some philosophical
questions that have been haunting me:
1) Is there any reason why developments in artificial intelligence
and computer technology could not someday produce a machine with
human consciousness (i.e. an I-story)?
2) If the answer to the above question is no, and such a machine were
produced, what would distinguish it from humans as far as "human"
rights were concerned? Would it be murder for us to destroy such a
machine? What about letting it die of natural (?) causes if we
have the ability to repair it indefinitely?
(Note: Just having a unique, human genetic code does not legally make
one human as per the 1973 *Roe vs. Wade* Supreme Court decision on
abortion.)
Thanks in advance.
Paul Dubuc
[For an excellent discussion of the rights and legal status of AI
systems, see Marshal Willick's "Artificial Intelligence: Some Legal
Approaches and Implications" in the Summer '83 issue (V. 4, N. 2) of
AI magazine. The resolution of this issue will of course be up to the
courts. -- KIL]
------------------------------
Date: 28 Oct 1983 21:01-PDT
From: fc%usc-cse%USC-ECL@SRI-NIC
Subject: Halting in learning programs
If you restrict the class of things that can be learned by your
program to those which don't cause infinite recursion or circularity,
you will have a good solution to the halting problem you state.
Although generalized learning might be nice, until we know more about
learning, it might be more appropriate to select specific classes of
adaption which lend themselves to analysis and development of new
theories.
As a simple example of a learning automaton free of the halting problem,
the Purr Puss system developed by John Andreae (from New Zealand) does
an excellent job of learning without any such difficulty. Other such
systems exist as well, all you have to do is look for them. I guess the
point is that rather than pursue the impossible, find something
possible that may lead to the solution of a bigger problem and pursue
it with the passion and rigor worthy of the problem. An old saying:
'Problems worthy of attack prove their worth by fighting back'
Fred
------------------------------
Date: Sat, 29 Oct 83 13:23:33 CDT
From: Bob.Warfield <warbob.rice@Rand-Relay>
Subject: Halting Problem Discussion
It turns out that any computer program running on a real piece of hardware
may be simulated by a deterministic finite automaton, since it only has a
finite (but very large) number of possible states. This is usually not a
productive observation to make, but it does present one solution to the
halting problem for real (i.e. finite) computing hardware. Simulate the
program in question as a DFA and look for loops. From this, one should
be able to tell what input to the DFA would produce an infinite loop,
and recognition of that input could be done by a smaller DFA (the old
one sans loops) that gets incorporated into the learning program. It
would run the DFA in parallel (or 1 step ahead?) and take action if a
dangerous situation appeared.
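A toy sketch of my own of the loop-finding step (step/2 below is a
made-up transition table standing in for the extracted DFA):
    step(s0, s1).                   % hypothetical transitions
    step(s1, s2).
    step(s2, s1).                   % s1 -> s2 -> s1 is a loop
    reaches_loop(State) :-
            walk(State, []).
    walk(State, Seen) :-
            visited(State, Seen), !.        % a state recurs: loop found
    walk(State, Seen) :-
            step(State, Next),
            walk(Next, [State|Seen]).
    visited(State, [S|_]) :- State == S, !.
    visited(State, [_|Seen]) :- visited(State, Seen).
    % ?- reaches_loop(s0).   succeeds: a loop is reachable from s0.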
Bob Warfield
warbob@rice
------------------------------
Date: Mon 31 Oct 83 15:45:12-PST
From: Calton Pu <CALTON@WASHINGTON.ARPA>
Subject: Halting Problem: Resource Use
From Shebs@Utah-20:
The question is this: consider a learning program, or any
program that is self-modifying in some way. What must I do
to prevent it from getting caught in an infinite loop, or a
stack overflow, or other unpleasantnesses? ...
How can *it* know when it's stuck in a losing situation?
Trying to come up with a loop detector program seemed to find few enthusiasts.
The limited loop detector suggests another approach to the "halting problem".
The question above does not require the solution of the halting problem,
although that could help. The question posed is one of resource allocation
and use. Self-awareness is only necessary for the program to watch itself
and know whether it is making progress considering its resource consumption.
Consequently it is not surprising that:
The best answers I saw were along the lines of an operating
system design, where a stuck process can be killed, or
pushed to the bottom of an agenda, or whatever.
However, Stan wants more:
Workable, but unsatisfactory. In the case of an infinite
loop (that nastiest of possible errors), the program can
only guess that it has created a situation where infinite
loops can happen.
The real issue here is not whether the program is in a loop, but whether the
program will be able to find a solution in feasible time. Suppose a program
will take a thousand years to find a solution, will you let it run that long?
In other words, the problem is one of measuring gained progress versus
spent resources. It may turn out that a program is not in a loop but you
choose to write another program instead of letting the first run to completion.
Looping is just one of the losing situations.
Summarizing, the learning program should be allowed to see a losing situation
because it is infeasible, whether the solution is possible or not.
From this view, there are two aspects to the decision: the measurement of
progress made by the program, and monitoring resource consumption.
It is the second aspect that involves some "operating systems design".
I would be interested to know whether your parser knows it is making progress.
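One way to make the resource-accounting idea concrete in Prolog terms
(a toy sketch of my own, not anything Calton proposes) is a
meta-interpreter that charges every inference step against a fixed
budget and simply fails when the budget runs out, so a "losing"
computation shows up as exhaustion of resources rather than as a
detected loop:
    solve(Goal, Budget) :-
            solve(Goal, Budget, _BudgetLeft).
    solve(true, B, B) :- !.
    solve((G1, G2), B0, B) :- !,
            solve(G1, B0, B1),
            solve(G2, B1, B).
    solve(Goal, B0, B) :-
            B0 > 0,
            B1 is B0 - 1,
            clause(Goal, Body),             % user-defined predicates only
            solve(Body, B1, B).
    % ?- solve(ancestor(abraham, X), 50).   % made-up goal: give up after
    %                                       % 50 inference steps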
-Calton-
Usenet: ...decvax!microsoft!uw-beaver!calton
------------------------------
Date: 31 Oct 83 2030 EST
From: Dave.Touretzky@CMU-CS-A
Subject: forwarded article
- - - - Begin forwarded message - - - -
Date: 31 Oct 1983 18:41 EST (Mon)
From: Daniel S. Weld <WELD%MIT-OZ@MIT-MC.ARPA>
To: macmol%MIT-OZ@MIT-MC.ARPA
Subject: Molecular Computers
Below is a forwarded message:
From: David Rogers <DRogers at SUMEX-AIM.ARPA>
I have always been confused by the people who work on
"molecular computers", it seems so stupid. It seems much
more reasonable to consider the reverse application: using
computers to make better molecules.
Is anyone out there excited by this stuff?
MOLECULAR COMPUTERS by Lee Dembart, LA Times
(reprinted from the San Jose Mercury News 31 Oct 83)
SANTA MONICA - Scientists have dreamed for the past few years of
building a radically different kind of computer, one based on
molecular reactions rather than on silicon.
With such a machine, they could pack circuits much more tightly than
they can inside today's computers. More important, a molecular
computer might not be bound by the rigid binary logic of conventional
computers.
Biological functions - the movement of information within a cell or
between cells - are the models for molecular computers. If that basic
process could be reproduced in a machine, it would be a very powerful
machine.
But such a machine is many, many years away. Some say the idea is
science fiction. At the moment, it exists only in the minds of
several dozen computer scientists, biologists, chemists and engineers,
many of whom met here last week under the aegis of the Crump Institute
for Medical Engineering at the University of California at Los
Angeles.
"There are a number of ideas in place, a number of technologies in
place, but no concrete results," said Michael Conrad, a biologist and
computer scientist at Wayne State University in Detroit and a
co-organizer of the conference.
For all their strengths, today's digital computers have no ability to
judge. They cannot recognize patterns. They cannot, for example,
distinguish one face from another, as even babies can.
A great deal of information can be packed on a computer chip, but it
pales by comparison to the contents of the brain of an ant, which can
protect itself against its environment.
If scientists had a computer with more flexible logic and circuitry,
they think they might be able to develop "a different style of
computing", one less rigid than current computers, one that works more
like a brain and less like a machine. The "mood" of such a device
might affect the way scientists solve problems, just as people's moods
affect their work.
The computing molecules would be manufactured by genetically
engineered bacteria, which has given rise to the name "biochip" to
describe a network of them.
"This is really the new gene technology", Conrad said.
The conference was a meeting on the frontiers - some would say fringes
- of knowledge, and several times participants scoffed, saying that
the discussion was meandering into philosophy.
The meeting touched on some of the most fundamental questions of brain
and computer research, revealing how little is known of the mind's
mechanisms.
The goal of artificial intelligence work is to write programs that
simulate thought on digital computers. The meeting's goal was to think
about different kinds of computers that might do that better.
Among the questions posed at the conference:
- How do you get a computer to chuckle at a joke?
- What is the memory capacity of the brain? Is there a limit to that
capacity?
- Are there styles of problem solving that are not digitally
computable?
- Can computer science shed any light on the mechanisms of biological
science? Can computer science problems be addressed by biological
science mechanisms?
Proponents of molecular computers argue that it is possible to make
such a machine because biological systems perform those processes all
the time. Proponents of artificial intelligence have argued for years
that the existence of the brain is proof that it is possible to make a
small machine that thinks like a brain.
It is a powerful argument. Biological systems already exist that
compute information in a better way than digital computers do. "There
has got to be inspiration growing out of biology", said F. Eugene
Yates, the Crump Institute's director.
Bacteria use sophisticated chemical processes to transfer information.
Can that process be copied?
Enzymes work by stereoscopically matching their molecules with other
molecules, a decision-making process that occurs thousands of times a
second. It would take a binary computer weeks to make even one match.
"It's that failure to do a thing that an enzyme does 10,000 times a
second that makes us think there must be a better way," Yates said.
In the history of science, theoretical progress and technological
progress are intertwined. One makes the other possible. It is not
surprising, therefore, that thinking about molecular computers has
been spurred recently by advances in chemistry and biotechnology that
seem to provide both the materials needed and a means for producing it
on a commercial scale.
"If you could design such a reaction, you could probably get a
bacteria to make it," Yates said.
Conrad thinks that a functioning machine is 50 years away, and he
described it as a "futuristic" development.
- - - - End forwarded message - - - -
------------------------------
End of AIList Digest
********************
∂02-Nov-83 0939 @SRI-AI.ARPA:desRivieres.PA@PARC-MAXC.ARPA CSLI Activities for Thursday Nov. 3rd
Received: from SRI-AI by SU-AI with TCP/SMTP; 2 Nov 83 09:39:24 PST
Received: from PARC-MAXC.ARPA by SRI-AI.ARPA with TCP; Wed 2 Nov 83 09:36:37-PST
Date: Wed, 2 Nov 83 09:25 PST
From: desRivieres.PA@PARC-MAXC.ARPA
Subject: CSLI Activities for Thursday Nov. 3rd
To: csli-friends@SRI-AI.ARPA
Reply-to: desRivieres.PA@PARC-MAXC.ARPA
CSLI SCHEDULE FOR THURSDAY, NOVEMBER 3rd, 1983
10:00 Research Seminar on Natural Language
Speaker: Ivan Sag (Stanford)
Topic: "Phrase Structure Analysis: Coordination
and Unbounded Dependencies"
Place: Redwood Hall, room G-19
12:00 TINLunch
Discussion leader: Ron Kaplan (CSLI-Xerox)
Paper for discussion: "How are grammars represented?"
by Edward Stabler,
BBS 6, pp. 391-421, 1983.
Place: Ventura Hall
2:00 Research Seminar on Computer Languages
Speaker: Carolyn Talcott (Stanford)
Title: "Symbolic computation - a view of LISP and
related systems"
Place: Redwood Hall, room G-19
3:30 Tea
Place: Ventura Hall
4:15 Colloquium
Speaker: Glynn Winskel (CMU)
Title: "The Semantics of a Simple Programming Language"
Place: Redwood Hall, room G-19
Note to visitors:
Redwood Hall is close to Ventura Hall on the Stanford Campus. It
can be reached from Campus Drive or Panama Street. From Campus Drive
follow the sign for Jordan Quad. Parking is in the C-lot between
Ventura and Jordan Quad.
∂02-Nov-83 0955 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #45
Received: from SU-SCORE by SU-AI with TCP/SMTP; 2 Nov 83 09:54:59 PST
Date: Tuesday, November 1, 1983 2:01PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #45
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Wednesday, 2 Nov 1983 Volume 1 : Issue 45
Today's Topics:
Implementations - Assert & Setof & POPLOG,
User Convenience Vs. Elegance
LP Library - Update
----------------------------------------------------------------------
Return-Path: <PEREIRA@SRI-AI.ARPA>
Date: Tue 1 Nov 83 08:39:12-PST
From: Pereira@SRI-AI
Subject: 'setof' Again ...
To: prolog@SU-SCORE.ARPA
Some contributors to this Digest have suggested that using 'setof'
rather than 'findall' is just a matter of taste, thus justifying
their use of the name 'setof' for 'findall'. In previous notes,
I gave clear examples that show this is NOT a matter of taste:
- 'setof' has a proper semantics in terms of finite failure,
'findall' does not;
- interchanging goals in a program with 'findall' may produce
different solutions, but not with 'setof'.
Warren's 'setof' is RIGHT in a way that 'findall' isn't. Please
stop muddling a simple matter !
-- Fernando Pereira
PS: Now don't come back and start arguing about what 'right' means.
We've been subjected to more than enough Humpty Dumpty semantics.
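For readers new to the dispute, a small illustration of my own of the
difference being insisted on here, using the made-up relation p/2:
    p(1, a).
    p(1, b).
    p(2, c).
    % ?- setof(X, p(Y,X), L).       enumerates the free variable Y:
    %       Y = 1, L = [a,b]  ;  Y = 2, L = [c]
    %
    % ?- findall(X, p(Y,X), L).     pools all solutions, leaving Y unbound:
    %       L = [a,b,c]
    %
    % ?- setof(X, p(3,X), L).       fails (finite failure), whereas
    % ?- findall(X, p(3,X), L).     succeeds with L = [].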
------------------------------
Date: Tue 1 Nov 83 06:17:55-PST
From: SHardy@SRI-KL
Subject: Use of Assert
A recent message suggested that users wanting to use ASSERT and
RETRACT (to model, say, a network) should not use the built in
definitions but instead re-implement those operations using, the
message said, a tree-like term to represent a personal database,
i.e.:
tree(PROPOSITION, LEFTSUBTREE, RIGHTSUBTREE).
This tree would be passed as an explicit extra argument to all
predicates. This approach, it was suggested, is both cleaner and
more efficient than using ASSERT.
I have a comment and a question. The comment is that programs
are no cleaner for being written by users than by system programmers.
A program written using terms as a personal database will be just as
hard to analyse automatically (say) as a program using ASSERT and
RETRACT. The reason for this is that the TREE term has as its
intended interpretation a collection of propositions, unlike most
terms whose interpretation is some object in the real world.
Propositions, such as HUMAN(STEVE, TREE), start having bizarre
interpretations too. Mixing terms and meta-terms in the same
program always leads to confusion irrespective of who does it,
user or system.
The question is addressed mainly to Prolog implementors. Does
anyone know of a Prolog implementation that, like Genesereth's
MRS, uses different representations for stored propositions
depending on the use those propositions get ? For example, the
following code:
setq(L, R) :- retract(eval(L, _)), assert(eval(L, R)).
provides something close to assignment. A smart Prolog system
could recognize the code and store EVAL as a collection of
ordinary Von Neumann variables.
-- Steve Hardy,
Teknowledge
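[ A usage sketch of the setq/2 clause above, assuming eval/2 is used
  only to record the current value of each name:

      ?- assert(eval(x, 0)).    % give x an initial value
      ?- setq(x, 1).            % replaces eval(x, 0) by eval(x, 1)
      ?- eval(x, V).
      V = 1

  A system that recognized this retract-then-assert pattern could,
  as suggested above, keep the value of x in a single mutable cell
  rather than in the clause database. -ed ]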
------------------------------
Date: Tue 1 Nov 83 06:42:43-PST
From: SHardy@SRI-KL
Subject: POPLOG
A recent message said that programming environments offering more
than one language, such as POPLOG or LM-PROLOG, are no solution to
difficulties in using pure Prolog, such as wanting to use ASSERT or
RPLACA. The message said such systems in fact make things worse
since they make it necessary to learn two or more programming
languages.
I can't speak for the LM-PROLOG people, but as a one time member
of the POPLOG team, I can say that our goals for POPLOG were user
convenience and efficiency - not finding solutions to the theoretical
problems of mixing meta level knowledge with base level knowledge
(E.g. using ASSERT).
We wanted, for example, to interact with our programs via a user
modifiable screen editor. Writing a screen editor in pure Prolog is
impossible. We wanted to process arrays of floating point numbers.
We wanted to be able to use HEARSAY-like agenda mechanisms. We
wanted to be able to write efficient compilers containing a minimum of
machine code. We wanted to do lots of things easier in LISP or POP-11
than Prolog.
We felt that professional programmers could be fluent in several
languages (E.g. Prolog, POP or LISP, C or FORTRAN, and assembler) and
would appreciate the choice of appropriate level for any particular
module of a large system.
-- Steve Hardy,
Teknowledge
------------------------------
Date: Monday, 31-Oct-83 22:16:53-GMT
From: Richard HPS (on ERCC DEC-10) <OKeefe.R.A. at EDXA>
Subject: Reply to Steve Hardy's Points
I'm afraid there is a problem with our mailing-list distribution
program, so I have missed some issues of the Prolog Digest. That is
why I am replying to Steve Hardy's points only now. He first says
Recently, I read of a new implementation of Prolog. It had an
exciting new lazy evaluation mode. It could outperform DEC-10
Prolog. What is more, it had access to all sorts of good things
like screen editors and windows.
Except for outperforming DEC-10 Prolog, this sounds a lot like
LM-Prolog. Give me a Lisp Machine, and I'll order a copy of
LM-Prolog that day. Go on, I dare you: give me a Lisp Machine !
Unfortunately, its definition of Bagof was ``wrong'', that is,
it didn't agree with the definition of Bagof on the DEC-20.
LM-Prolog has a primitive called "collect" which is more powerful
than bagof. Its implementor has been fair to others (by not calling
it bagof, which it isn't) and himself (it's better). I don't want
to criticise any Prolog implementor's work, not even Micro-Prolog.
My cry is "Down with Humpty-Dumpty!", that's all.
Actually, this doesn't bother me since I think DEC-20 Prolog
has it wrong. As Richard says, it depends on what one thinks
should happen to calls like:
?- bagof(X, likes(X, Y), LIKERS).
Should LIKERS be the bag of all Xs that like anything or should
it be the bag of all Xs that like the same thing with failure
generating a new set ?
The interpretation I prefer is the first; it should be the set
of all Xs who like anything.
I understand how others may disagree with my preference. I don't
understand how one could think one interpretation `objectively'
right and the other wrong.
There is just a little Edinburgh imperialism underlying
Richard's messages !
But bagof satisfies BOTH groups. I can get the set of all Xs who
like the same Y by writing the question as it stands, and he can
get the set of people who like anything by writing Y↑likes(X,Y).
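[ Concretely, given the clauses likes(alice, logic), likes(bob, logic)
  and likes(bob, music) -- invented here only for illustration --

      ?- bagof(X, likes(X, Y), L).

  yields one solution per value of Y (e.g. Y = logic, L = [alice,bob]),
  while

      ?- bagof(X, Y↑likes(X, Y), L).

  yields the single solution L = [alice,bob,bob]. -ed ]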
My objectivity in claiming that bagof is more powerful than findall
is beyond dispute. Since Steve Hardy seems to regard asserting
someone else's prior claim to a name "imperialistic", perhaps I
will be allowed to start a company and call it Teknowledge ?
His postscript says
it is a mistake to have Assert/Retract modify the behaviour
of currently active procedure calls. That's why the Newpay
example is so hard in DEC-10 Prolog. The solution is to
change DEC-10 Prolog.
As a matter of fact, it isn't the reason, but I am only too happy
to agree that modifying running code is a Bad Thing. Saying that
the solution is to change DEC-10 Prolog doesn't help. That was
the point of my original message, after all. The question is,
what do we change it TO ? What definition of assert and retract
does Steve Hardy have in mind that will be trouble free ? I would
be delighted to adopt any real solution that I can understand fully.
His next message says
I disagree with the proposal that built-in predicates should
emulate tables of assertions.
So he does, but on the grounds of type-checking. But there is a
Prolog type checker in {SU-SCORE}PS:<Prolog>TYPECH.PL; its
startup file is PROLOG.TYP. If Steve Hardy wants
succ(foo, X)
to give an error message, the answer is to pass his program
through the type checker, complete with type declaration
:- pred succ(integer, integer).
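[ That is, a checked program containing, say,

      :- pred succ(integer, integer).
      succ(X, S) :- S is X + 1.

  (the definition is invented here only to complete the example)
  would have the goal succ(foo, X) reported as ill-typed before it
  is ever run. -ed ]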
The best way to deal with errors of the succ(foo,X) sort is to
prove at compile-time that they can't happen. I do not claim that
the Mycroft and O'Keefe (mostly Mycroft) type checker is the last
word in Prolog type checking. That would be absurd. I know of at
least three improvements that could be made to it on its own, and
it should be properly integrated with the cross-referencers. But
at least it exists and is easily obtained.
That message ends by saying
Crucially, we have to decide whether Prolog is a practical
programming language (and so subject to occasional compromises)
or a concept too pure to be sullied by practical considerations.
The ``principal''(sic) by which implementors make decisions
should be ``what helps the user''.
We are in total agreement on the second paragraph. The difference
is that I am a humble programmer (in Dijkstra's sense). I know
that unless a language is very simple and very clean, I cannot use
it reliably. Show me an InterLisp manual and I turn pale. Show
me an Ada manual and I join the CND. "Keep it simple" is a very
practical request. Where DO people get the idea that programming
is done by Masters? I worked briefly in a successful software
house where most of the programmers were really struggling with
Pascal. I repeat, that was a *successful* software house.
His third message contains a masterpiece of misquotation.
A recent message on the use of Assert seemed to imply that it,
and Retract, shouldn't be used because neither is well
implemented on the DEC-10 and both are, in fact, quite hard to
implement.
In fact assert and retract on the DEC-10 are within a factor of 3
(my guesstimate based on looking at the code and thinking about
assembly-language versions) of being as efficient as it is possible
for them to be. They are in general very easy to implement, and I
don't know of a Prolog that lacks them. What I claimed is that
1. We do not have a specification of what they SHOULD do, so
we cannot tell whether or not any implementation is correct.
2. The implementations in DEC-10 Prolog and C-Prolog (which, I
repeat, are reasonably efficient) are fairly surprise-free,
having been changed until their results stopped being
surprising to experts, but the details are still difficult
to explain.
3. The existence of assert and retract has implications for other
parts of a Prolog system. A Prolog program which makes no use
of them will run slower in a system which supports data base
hacking than in one which does not, but is otherwise similar.
(Assuming the second system has made certain optimisations
blocked by assert and retract.)
4. We could go ahead and do the optimisations anyway. There is
essentially 0 implementation difficulty in doing this. The
trouble is that the result is almost impossible to explain.
Also, would it be correct (see 1) ?
He further says
Although Assert is `not very logical', it can be extremely useful.
True.
Without Assert one could not implement SetOf.
False.
There was a message in this Digest about POPLOG. Sent by Steve
Hardy. PopLog has setof. Does it use the database? No ! There
is also a findall (called fast←bagof) which similarly doesn't
use the data base. Neither of them is of course implemented in
Prolog, but then neither is assert.
Without SetOf all kinds of things (such as making use of a
closed world assumption) are hard.
True.
But (a) setof is my example (an operation with a clear definition
which can be implemented more efficiently without using the data
base), and (b) even with setof you can't make use of a closed
world assumption: finite failure (any instance of the Generator
which is not returned in the set is finitely failed) is strictly
weaker than the closed world assumption. (See John Lloyd's paper
"Foundations of Logic programming".) If anyone has a Prolog which
really uses the CWA, please tell this newsgroup AT ONCE, so we
can elect you to the pantheon.
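[ An example of the gap, using an invented predicate q/1: with the
  single clause

      q(a) :- q(a).

  the closed world assumption licenses the conclusion that q(a) is
  false, so setof(X, q(X), S) "ought" simply to fail; operationally
  the goal loops, because q(a) is not finitely failed. -ed ]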
Crucially, Prolog now has several classes of user. Some are
concerned with its purity and logical roots; others are
concerned with getting fast performance out of Prolog on
Von Neumann machines; others are concerned with using Prolog
to solve some problem.
Why should the last group be bothered by the concerns of the
first two ?
I think "non Von Neumann" was meant in the second class, assert
and retract are especially nasty on parallel machines, while we
can cope most of the time on serial machines. Of course the
last group should not be bothered by the concerns of the other
groups, provided, that is, that they are happy with
slow
buggy
programs and programming environments with no tools other than
editors, tracers, and buggy cross-referencers. Whence comes
this idea that Real Programmers need trash and only LongHaired
Pinkos want to understand things ? Why is it not considered
"practical" to want to have the best possible programming tools ?
Please recall that my original request was for descriptions of
useful operations which are *currently* implemented using the
data base, but which have a clear description of their own,
and which might be implemented another way. So far we have
setof and bagof -- David Warren
update -- me
assuming -- me (not doable with assert/retract)
copy -- me
queues -- Fernando Pereira
global variables -- folklore
Let's Have Some More !
Re: global variables: a new version of <Prolog>FlagRo.Pl will be
crossing the Atlantic soon.
I shall now go and try for the Nth time to read "Automated
Theorem Proving" by Wolfgang Bibel, turning for relief to "A
Discipline of Programming".
------------------------------
Date: Tue 1 Nov 83 13:59:15-PST
From: Chuck Restivo <Restivo@SU-SCORE>
Subject: LP Library Update
ListUt.Pl and FlagRo.Pl have been updated and are available at
{SU-SCORE} on PS:<Prolog> . For those readers who have read-only
access to the network, I have a limited number of hard copies that
could be mailed.
-- ed
------------------------------
End of PROLOG Digest
********************
∂02-Nov-83 1727 @SRI-AI.ARPA:YM@SU-AI Knowledge Seminar
Received: from SRI-AI by SU-AI with TCP/SMTP; 2 Nov 83 17:27:19 PST
Received: from SU-AI.ARPA by SRI-AI.ARPA with TCP; Wed 2 Nov 83 17:15:03-PST
Date: 02 Nov 83 1712 PST
From: Yoni Malachi <YM@SU-AI>
Subject: Knowledge Seminar
To: csli-friends@SRI-AI
CC: dkanerva@SRI-AI
∂02-Nov-83 0920 vardi@Diablo Knowledge Seminar
Received: from SU-HNV by SU-AI with PUP; 02-Nov-83 09:20 PST
Date: Wed, 2 Nov 83 09:17 PST
From: Moshe Vardi <vardi@Diablo>
Subject: Knowledge Seminar
We are planning to start at IBM San Jose a research seminar on theoretical
aspects of reasoning about knowledge, such as reasoning with incomplete
information, reasoning in the presence of inconsistencies, and reasoning about
changes of belief. The first few meetings are intended to be introductory
lectures on various attempts at formalizing the problem, such as modal logic,
nonmonotonic logic, and relevance logic. There is a lack of good research in
this area, and the hope is that after a few introductory lectures, the format of
the meetings will shift into a more research-oriented style. The first meeting
is scheduled for Friday, Nov. 18, at 1:30, with future meetings also to be held
on Friday afternoon, but this may change if there are a lot of conflicts. The
first meeting will be partly organizational in nature, but there will also be a
talk by Joe Halpern on "Applying modal logic to reason about knowledge and
likelihood".
For further details contact:
Joe Halpern [halpern.ibm-sj@rand-relay, (408) 256-4701]
Yoram Moses [yom@sail, (415) 497-1517]
Moshe Vardi [vardi@su-hnv, (408) 256-4936]
∂02-Nov-83 2049 @SRI-AI.ARPA:ADavis@SRI-KL.ARPA Center for Language Mailing List
Received: from SRI-AI by SU-AI with TCP/SMTP; 2 Nov 83 20:49:08 PST
Received: from SRI-KL.ARPA by SRI-AI.ARPA with TCP; Wed 2 Nov 83 19:08:18-PST
Date: Wed 2 Nov 83 16:52:49-PST
From: Al Davis <ADavis at SRI-KL>
Subject: Center for Language Mailing List
To: csli-friends at SRI-AI
Can you put me on it?? I am presently running the AI Architecture group
at FLAIR.
al
-------
∂02-Nov-83 2109 CLT SPECIAL ANNOUNCEMENT
To: "@DIS.DIS[1,CLT]"@SU-AI
A COMMEMORATIVE MEETING
co-sponsored by the departments of
Mathematics and Philosophy
for
ALFRED TARSKI
to be held on Monday, November 7, 1983
from 5:05pm to 6:05pm
in Jordan Hall (Psychology) 41
SPEAKERS
Jon Barwise
Sol Feferman
Pat Suppes
∂03-Nov-83 0020 @SRI-AI.ARPA:vardi%SU-HNV.ARPA@SU-SCORE.ARPA Knowledge Seminar
Received: from SRI-AI by SU-AI with TCP/SMTP; 3 Nov 83 00:19:53 PST
Received: from SU-SCORE.ARPA by SRI-AI.ARPA with TCP; Thu 3 Nov 83 00:19:08-PST
Received: from Diablo by Score with Pup; Thu 3 Nov 83 00:06:30-PST
Date: Thu, 3 Nov 83 00:06 PST
From: Moshe Vardi <vardi%Diablo@SU-Score>
Subject: Knowledge Seminar
To: csli-friends@sri-ai
We are planning to start at IBM San Jose a research seminar
on theoretical aspects of reasoning about knowledge,
such as reasoning with incomplete information, reasoning in the presence
of inconsistencies, and reasoning about changes of belief. The first
few meetings are intended to be introductory lectures on various attempts
at formalizing the problem, such as modal logic, nonmonotonic logic, and
relevance logic. There is a lack of good research in this area, and
the hope is that after a few introductory lectures, the
format of the meetings will shift into a more research-oriented style.
The first meeting is scheduled for Friday, Nov. 18, at 1:30,
with future meetings also to be held on Friday afternoon, but this may
change if there are a lot of conflicts. The first meeting will be partly
organizational in nature, but there will also be a talk by Joe Halpern
on "Applying modal logic to reason about knowledge and likelihood".
For further details contact:
Joe Halpern (halpern.ibm-sj@rand-relay, (408) 256-4701)
Yoram Moses (yom@su-hnv, (415) 497-1517)
Moshe Vardi (vardi@su-hnv, (408) 256-4936)
If you want to be on the mailing list, contact Moshe Vardi
∂03-Nov-83 0224 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #46
Received: from SU-SCORE by SU-AI with TCP/SMTP; 3 Nov 83 02:24:26 PST
Date: Wednesday, November 2, 1983 11:42AM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #46
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Thursday, 3 Nov 1983 Volume 1 : Issue 46
Today's Topics:
Education - Teaching Prolog,
Implementations - User Convenience Vs. Elegance & Performance,
Query - WarPlan
----------------------------------------------------------------------
Date: Tuesday, 25-Oct-83 08:51:48-GMT
From: Bundy HPS (on ERCC DEC-10) <Bundy@EDXA>
Subject: Teaching Prolog
I have a research grant from the UK Social Science Research
Council to study methods of teaching Prolog, especially to
non-scientists who may lack a strong background in mathematics.
This grant funds a postdoc research fellow, Helen Pain. Our
first subgoal is to come up with a good 'story' to tell students
about how Prolog works. A wide (too wide) variety of such
stories can be found in Kowalski's logic for problem solving
book, and the Clocksin/Mellish primer. These include OR trees,
AND/OR trees, Byrd boxes, and several others.
I have produced a note (too big for the Digest) which discusses
and compares six such stories. We plan to build a modular story
which combines the best of all those we can find. Modular here
means that the full story will contain information on everything
you want the students to know, but different parts of this
information will be displayed according to the aspect you are
focussing on at any given time.
This message is to inform Prolog users of our project and to
seek further Prolog stories and feedback on the utility of
particular stories in teaching Prolog to different sorts of
students.
-- Alan Bundy
[ The Prolog stories are available at SU-SCORE as
PS:<Prolog>Bundy←LPStories.mss
and
Bundy←LPStories.Figures
The report can also be ordered from:
Alan Bundy
Department of Artificial Intelligence
8 Hope Park Square
Meadow Lane
Edinburgh, EH9 25G
Scotland -ed ]
------------------------------
Date: Tue, 1 Nov 83 09:08:04 PST
From: Bijan Arbab <V.Bijan@UCLA-LOCUS>
Subject: WarPlan
Does anyone out there know of recent modifications done to the
WarPlan program that appeared in `How To Solve It In Prolog' ?
I may add that there are some non-trivial problems in the code
as it was printed in that book. I have gotten rid of almost all
of them but am not done yet !
All comments welcome,
-- Bijan
------------------------------
Date: Tue, 1 Nov 83 15:54:14 EST
From: Yoav Shoham <Shoham@YALE>
Subject: A Reply (NOT a Criticism)
Richard issued "a reply to his critics", and I was surprised to
find myself listed among them. It is exactly because I prefer
"pure" code that I asked for a "pure" implementation of generated
lists, or lazy lists as he refers to them. His comments were
instructive, if a little pointed. To be more specific, let me
review some of them briefly:
1. "[copying] could be implemented very easily in C or MACRO-10"
- good, but in the meanwhile...
2. "It is not at all difficult to implement [copying] in Prolog
using "var" and "univ" " - right, and that's the pure solution
I alluded to originally. However, that solution (at least the
one I've managed to come up with) is very expensive, and about
90% of the time is spent on tearing structures apart and creating
new ones.
3. The assert and retract of the copying should not be separated -
absolutely right. (The reason they ARE separated in my code is that
this code originated in a different task involving the
implementation of data-dependency in Prolog. There the copy is
tampered with in the intermediate code, and thus the separation).
Again, Richard's remark is correct, and I've added the "copy"
predicate to my utility library.
4. The implementation of glists should make them look like ordinary
Prolog objects - that's exactly what my implementation does
(see next/2). next/1 is only optional if the list is to be
global, as one would want it to be in a data-dependency network.
If you don't like it, ignore it. In fact, Richard's implementation
does not pass enough of the structure around - see next and final
remark.
5. Richard's nice implementation - raises a few questions. First,
I don't know how apply/2 is defined; presumably it employs a
copying mechanism similar to mine (the "copy" predicate?). Is that
really the case? If so, it is a larger source of impurity than
the two mentioned by Richard. If "apply" doesn't import
additional impurity, I'm interested to see its definition.
Second, in the membership test Richard only passes around the
last element generated, so he couldn't create the list "all prime
numbers". (This deficiency is really easy to correct). Richard
also restricts Step to be fixed, so his implementation of the list
[1,2,4,7,11,...] will presumably be awkward. Finally, Richard's
use of the first argument as both input and output via the
unbound variable is more elegant than my straightforward solution.
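[ For reference, the database-based copying idiom discussed in
  points 1-3 above is usually written as a single clause, e.g.

      copy(Term, Copy) :-
          asserta('$copied'(Term)),
          retract('$copied'(Copy)),
          !.

  where '$copied'/1 is just a private functor chosen for this sketch.
  Because asserted clauses are copies, the retract returns Term with
  its variables renamed. The pure alternative walks the term with
  var and univ, as in point 2 above. -ed ]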
------------------------------
Date: Monday, 31-Oct-83 11:49:55-GMT
From: Bundy HPS (on ERCC DEC-10) <Bundy@EDXA>
Subject: Reply About Reasonable Prolog Implementations
Date: 9 Oct 1983 11:43:51-PDT (Sunday)
From: Adrian Walker <ADRIAN.IBM@Rand-Relay>
Subject: Prolog question
IBM Research Laboratory K51
5600 Cottle Road
San Jose
CA 95193 USA
Telephone: 408-256-6999
ARPANet: Adrian.IBM@Rand-Relay
10th October 83
Alan,
In answer to your question about Prolog implementations, we
do most of our work using the Waterloo Prolog 1.3 interpreter
on an IBM mainframe (3081). Although not a traditional AI
environment, this turns out to be pretty good. For instance,
the speed of the Interpreter turns out to be about the same
as that of compiled DEC-10 Prolog (running on a DEC-10).
As for environment, the system delivered by Waterloo is
pretty much stand alone, but there are several good environments
built in Prolog on top of it.
A valuable feature of Waterloo Prolog 1.3 is a 'system' predicate,
which can call anything on the system, E.g. a full screen editor.
The work on extracting explanations of 'yes' and 'no' answers
from Prolog, which I reported at IJCAI, was done in Waterloo
Prolog. We have also implemented a syllogistic system called
SYLLOG, and several expert system types of applications. An
English language question answerer written by Antonio Porto and
me, produces instantaneous answers, even when the 3081 has 250
users.
As far as I know, Waterloo Prolog only runs under the VM operating
system (not yet under MVS, the other major IBM OS for mainframes).
It is available, for a moderate academic licence fee, from Sandra
Ward, Department of Computing Services, University of Waterloo,
Waterloo, Ontario, Canada.
We use it with IBM 3279 colour terminals, which adds variety to a
long day at the screen, and can also be useful !
Best wishes,
-- Adrian Walker
Walker, A. (1981). 'SYLLOG: A Knowledge Based Data Management
System,' Report No. 034. Computer Science Department, New York
University, New York.
Walker, A. (1982). 'Automatic Generation of Explanations of
Results from Knowledge Bases,' RJ3481. Computer Science
Department, IBM Research Laboratory, San Jose, California.
Walker, A. (1983a). 'Data Bases, Expert Systems, and PROLOG,'
RJ3870. Computer Science Department, IBM Research Laboratory,
San Jose, California. (To appear as a book chapter)
Walker, A. (1983b). 'Syllog: An Approach to Prolog for
Non-Programmers.' RJ3950, IBM Research Laboratory, San Jose,
California. (To appear as a book chapter)
Walker, A. (1983c). 'Prolog/EX1: An Inference Engine which
Explains both Yes and No Answers.'
RJ3771, IBM Research Laboratory, San Jose, California.
(Proc. IJCAI 83)
Walker, A. and Porto, A. (1983). 'KBO1, A Knowledge Based
Garden Store Assistant.'
RJ3928, IBM Research Laboratory, San Jose, California.
(In Proc Portugal Workshop, 1983.)
------------------------------
End of PROLOG Digest
********************
∂03-Nov-83 0901 DKANERVA@SRI-AI.ARPA Newsletter No. 7, November 3, 1983
Received: from SRI-AI by SU-AI with TCP/SMTP; 3 Nov 83 09:00:38 PST
Date: Thu 3 Nov 83 08:55:00-PST
From: DKANERVA@SRI-AI.ARPA
Subject: Newsletter No. 7, November 3, 1983
To: csli-folks@SRI-AI.ARPA
CSLI Newsletter
November 3, 1983 * * * Number 7
To cover the activities of the 16 projects of the Situated
Language Program in a balanced way in the newsletter, I need more
information from members of those projects about meetings, speakers,
and so forth. Please get such information to me for the newsletter by
Wednesday noon each week, so that others can get a good picture of the
work going on within the SL Program.
- Dianne Kanerva
* * * * * * *
MEETINGS OF PRINCIPALS AND ASSOCIATES
The meetings on Monday and Wednesday (Oct. 24 and 26) were quite
helpful. As a result of these meetings, and responses from you to
them, there have been several good ideas for changes put forward which
Betsy and I are pursuing. Also, as a result of discussions following
these meetings, I drew up a list of tentative committee assignments,
which is being circulated among the suggested committee members for
approval. The list includes the following committees:
Building Committee (permanent)
Computing Committee (permanent)
Education Committee (permanent)
Course-development subcommittee (fall and winter, 83-84)
Workstation Committee (permanent)
Approaches to Human Language Seminar (fall 83)
Approaches to Computer Languages Seminar (fall 83)
LISP-course seminar (winter 83-84)
Semantics of Natural Languages Seminar (winter, 83-84)
Anaphora Seminar (spring, 84)
Semantics of Computer Languages Seminar (spring, 84)
Computer Wizards Committee (83-84)
Colloquium (permanent)
Postdoctoral Committee (permanent)
Workshop Committees:
Kaplan workshop
ML workshop
COLING
Morphosyntax and lexical morphology
Lexical phonology
Long-range planning
Outreach Committee (permanent)
TINLunch (permanent)
Library Connection (83-84)
It seems like an overwhelming number of committees, and I welcome
suggestions for ways of reducing or eliminating the work to be done.
- Jon Barwise
* * * * * * *
! Page 2
* * * * * * *
CSLI SCHEDULE FOR *THIS* THURSDAY, NOVEMBER 3, 1983
10:00 Research Seminar on Natural Language
Speaker: Ivan Sag (Stanford)
Topic: "Phrase Structure Analysis: Coordination
and Unbounded Dependencies"
Place: Redwood Hall, room G-19
12:00 TINLunch
Discussion leader: Ron Kaplan (CSLI-Xerox)
Paper for discussion: "How Are Grammars Represented?"
by Edward Stabler,
BBS 6, pp. 391-421, 1983.
Place: Ventura Hall
2:00 Research Seminar on Computer Languages
Speaker: Carolyn Talcott (Stanford)
Title: "Symbolic Computation--A View of LISP and
Related Systems"
Place: Redwood Hall, room G-19
3:30 Tea
Place: Ventura Hall
4:15 Colloquium
Speaker: Glynn Winskel (CMU)
Title: "The Semantics of a Simple Programming Language"
Place: Redwood Hall, room G-19
Note to visitors:
Redwood Hall is close to Ventura Hall on the Stanford Campus. It
can be reached from Campus Drive or Panama Street. From Campus Drive
follow the sign for Jordan Quad. Parking is in the C-lot between
Ventura and Jordan Quad.
* * * * * * *
! Page 3
* * * * * * *
CSLI SCHEDULE FOR *NEXT* THURSDAY, NOVEMBER 10, 1983
10:00 Research Seminar on Natural Language
Speaker: Ron Kaplan (CSLI-Xerox)
Title: "Linguistic and Computational Theory"
Place: Redwood Hall, room G-19
12:00 TINLunch
Discussion leader: Martin Kay (CSLI-Xerox)
Paper for discussion: "Processing of Sentences with
Intra-sentential Code-switching"
by A.K. Joshi,
COLING 82, pp. 145-150.
Place: Ventura Hall
2:00 Research Seminar on Computer Languages
Speaker: Glynn Winskel (CMU)
Title: "The Semantics of Communicating Processes"
Place: Redwood Hall, room G-19
3:30 Tea
Place: Ventura Hall
4:15 Colloquium
Speaker: Michael Beeson (San Jose State University)
Title: "Computational Aspects of Intuitionistic Logic"
Place: Redwood Hall, room G-19
Note to visitors:
Redwood Hall is close to Ventura Hall on the Stanford Campus. It
can be reached from Campus Drive or Panama Street. From Campus Drive
follow the sign for Jordan Quad. Parking is in the C-lot between
Ventura and Jordan Quad.
* * * * * * *
! Page 4
* * * * * * *
SCHEDULE OF VISITORS
This coming week, CSLI is sponsoring a series of lectures by
Glynn Winskel, whose special interest is in the semantics of computer
languages. We are fortunate in that Winskel has also agreed to serve
on the Advisory Panel for CSLI. He is currently a member of the Computer
Science Department at Carnegie-Mellon University and will be going to
Edinburgh in January. The schedule of his lectures appears below.
SPECIAL CSLI LECTURE SERIES BY GLYNN WINSKEL
CSLI announces a special series of lectures by Glynn Winskel, of
the CMU Computer Science Department, who will be visiting CSLI from
November 3 through 11. During his stay, he will be using room 27 in
Ventura Hall (497-1710). His lectures will be as follows:
1. The CSLI Colloquium, 4:15 p.m., Thursday, November 3, Redwood Hall
"The Semantics of a Simple Programming Language"
The operational and denotational semantics of a simple
programming language are presented and used to illustrate some basic
issues in the semantics of programming languages. I will try to show
how the more abstract concepts of denotational semantics connect with
more basic operational ideas. Specifically, I will define what it
means for the semantics to be equivalent and indicate briefly how to
prove them equivalent. I'll explain what it means for a denotational
semantics to be fully abstract with respect to an operational
semantics. Full abstraction is a useful criterion for the agreement
of denotational and operational semantics; it has been used
particularly in murky areas like the semantics of concurrency where at
present there is no generally accepted model. I'll motivate the basic
concepts of denotational semantics like complete partial orders
(cpo's) and the continuous functions on them.
2. Working Group in Semantics of Computer Languages,
9:30 a.m., Tuesday, November 8, at Xerox PARC.
Come to lower entrance at 9:25.
"The Semantics of Nondeterminism"
The programming language of the first talk will be extended to
admit nondeterminism. Starting from an operational semantics there
will be three natural equivalence relations between programs based on
their possible and inevitable behaviour. Accordingly when we move
over to the denotational semantics there will be three different power
domains with which to give the semantics. (Power domains are the cpo
analogue of powerset and they capture information about
nondeterministic behaviour of a computation, roughly the set of values
it produces.) With the intuitions secure (hopefully), we'll turn to a
more abstract treatment of power domains and show how they are used to
give denotational semantics to parallelism. In this talk both the
operational and denotational semantics will use the nondeterministic
interleaving (shuffling) of atomic actions to handle parallelism.
! Page 5
(Winskel lecture schedule, continued)
3. Approaches to Computer Languages Seminar, 2 p.m., Thursday,
November 10, Redwood Hall.
"The Semantics of Communicating Processes"
This talk is intended as an introduction to the work of Milner
and associates in Edinburgh and Hoare and associates in Oxford on
programming languages and semantics for communicating processes.
Milner's language Calculus of Communicating Systems (CCS) and Hoare's
Communicating Sequential Processes (CSP) are fairly similar. Both are
based on synchronized communication between processes.
4. Special meeting of C1 group, 3:30 p.m., Friday, November 11,
at SRI, conference room EL369. Visitors should come to the
entrance of Building E at 3:25 p.m.
"Event Structure Semantics of Communicating Processes"
An event structure consists of a set of events related by
causality relations specifying how an event depends for its occurrence
on the previous occurrence of events and how the occurrence of some
events excludes others. Here we focus on their use to give a
semantics to languages like CCS and CSP. Event structures capture
concurrency as causal independency and so give a noninterleaving model
of concurrent (or parallel) computations. Adopting a notion of
morphism appropriate to synchronizing processes we obtain a category
of event structures with categorical constructions closely related to
those constructions used by Milner and Hoare. We show how relations
between event structures and other models like Petri nets and some of
the interleaving models of Milner and Hoare, are expressed as
adjunctions.
* * * * * * *
WHY CONTEXT WON'T GO AWAY - FIFTH MEETING
On Tuesday, November 1, we held our fifth meeting in Ventura
Hall. The speaker was Howard Wettstein of the University of Notre
Dame. Presented below is the abstract of Wettstein's talk, "How to
Bridge the Gap Between Meaning and Reference."
Abstract: Direct reference theorists, opponents of Frege's
sense-reference picture of the connection between language and
reality, are divided on the question of the precise mechanism of such
connection. In this paper I restrict my attention to indexical
expressions and argue against both the causal theory of reference and
Donnellan's idea that reference is determined by the speaker's
intentions, and in favor of a more socially oriented view. Reference
is determined by the cues that are available to the competent
addressee.
NEXT WEEK, on Tuesday, November 8, at 3:15 p.m. in Ventura Hall,
the speaker will be Stanley Peters of CSLI.
* * * * * * *
! Page 6
* * * * * * *
A COMMEMORATIVE MEETING FOR
ALFRED TARSKI
Speakers: Jon Barwise, Sol Feferman, Pat Suppes
Cosponsored by the Departments of Mathematics and Philosophy
Monday, November 7, 5:05-6:05 p.m.
Room 41, Jordan Hall
* * * * * * *
SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
On Wednesday, November 2, Sol Feferman continued his seminar on
an introduction to "Reverse Mathematics." The talk continued the
survey, begun last week, of work by Friedman, Simpson, and others,
which provides sharp information in the form of equivalences as to
which set-existence axioms are needed to prove various statements in
analysis and algebra.
NEXT WEEK:
SPEAKER: Jose Meseguer, SRI
TOPIC: "Computability of Abstract Data Types"
TIME: Wednesday, November 9, 4:15-5:30 p.m.
PLACE: Stanford Mathematics Dept. Faculty Lounge (383-N)
* * * * * * *
CALL FOR PAPERS
West Coast Conference on Formal Linguistics
The third annual West Coast Conference on Formal Linguistics will
be held on March 16, 17, and 18, 1984, at the University of
California, Santa Cruz. Abstracts should be typed on one side only of
8 1/2 x 11 paper, with no identification of author or affiliation in
the heading, text, or references. Authors should be identified on a
separate 3 x 5 card containing the author's name, affiliation,
address, and phone number. Please send 8 copies of the abstract to:
WCCFL III, Linguistics, Cowell College, UCSC, Santa Cruz, CA 95064.
The deadline is Friday, December 16, 1983, and authors will be
notified in the second half of January about the acceptance of papers.
For further information, please contact Nancy Rankin, Syntax Research
Center, Cowell College, UCSC, Santa Cruz, CA 95064 (408-423-1597 or
408-429-2905).
* * * * * * *
! Page 7
* * * * * * *
COMPUTER SCIENCE COLLOQUIUM NOTICE, WEEK OF OCT 31 - NOV 4
11/01/1983  Talkware Seminar
            Tuesday, 1:15-2:30, Bldg. 160, Rm. 268
            Kristen Nygaard, Univ. of Oslo & Norwegian Computing Ctr.
            "SYDPOL: System Development and Profession-Oriented Languages"
11/01/1983  Computer Science Colloquium
            Tuesday, 4:15, Terman Aud.
            Dr. Jussi Ketonen, Stanford U. CS Dept.
            "A View of Theorem-Proving: Developing Expert Systems
             for Mathematical Reasoning"
11/02/1983  Knowledge Representation Group
            Wednesday, 1:30-3:00, TC117
            Sam Holtzman, To Be Announced
11/03/1983  AFLB
            Thursday, 12:30, MJH352
            Dr. J. M. Robson, "The Complexity of GO and Other Games"
* * * * * * *
FAIRCHILD SEMINAR ANNOUNCEMENT
Speaker: Dennis Klatt, Massachusetts Institute of Technology
Topic: Rules for Deriving Segmental Durations in
American English Sentences
Date: Monday, November 7
Time: 11:00 a.m.
Place: Fairchild Laboratory for Artificial Intelligence Research
(Visitors call ext. 4282 from lobby for an escort)
Abstract: Rules for the derivation of segmental durations appropriate
for English sentences are presently included in Dectalk. The nature
of these rules, and how they were derived by examination of a
moderate corpus of text, will be described.
Note: Dectalk is a speech-synthesis-by-rule program offered as an
option on certain DEC terminals.
* * * * * * *
-------
∂03-Nov-83 0952 DKANERVA@SRI-AI.ARPA
Received: from SRI-AI by SU-AI with TCP/SMTP; 3 Nov 83 09:49:20 PST
Return-Path: <TW@SU-AI>
Received: from SU-AI.ARPA by SRI-AI.ARPA with TCP; Thu 3 Nov 83 09:35:06-PST
Date: 03 Nov 83 0933 PST
From: Terry Winograd <TW@SU-AI>
To: dkanerva@SRI-AI
ReSent-date: Thu 3 Nov 83 09:40:45-PST
ReSent-from: DKANERVA@SRI-AI.ARPA
ReSent-to: csli-friends@SRI-AI.ARPA
Talkware Seminar - CS 377
Date: November 9
Speaker: John McCarthy (Stanford CS)
Topic: A Common Business Communication Language
Time: 2:15 - 4
Place: 380Y (Math corner)
The problem is to construct a standard language for computers
belonging to different businesses to exchange business communications.
For example, a program for preparing bids for made-to-order
personal computer systems might do a parts explosion and
then communicate with the sales programs of parts suppliers.
A typical message might inquire about the price and delivery
of 10,000 of a certain integrated circuit. Answers to such
inquiries and orders and confirmations should be expressible
in the same language. In a military version, a headquarters
program might inquire how many airplanes of a certain kind
were in operating condition.
It might seem that constructing such a language is merely
a grubby problem in standardization suitable for a committee of
businessmen. However, it turns out that the problem actually
involves formalizing a substantial fragment of natural language.
What is wanted is the semantics of natural language, not the
syntax.
The lecture will cover the CBCL problem, examples of
what should be expressible, ideas for doing it, and connections
of the problem to the semantics of natural language, mathematical
logic and non-monotonic reasoning.
Date: November 16
Speaker: Mike Genesereth (Stanford CS)
Topic: SUBTLE
Time: 2:15 - 4
Place: 380Y (Math corner)
Abstract:
No meeting November 23
Date: November 30
Speaker: Amy Lansky (Stanford / SRI)
Topic: GEM: a methodology for specifying concurrent systems
Time: 2:15 - 4
Place: 380Y (Math corner)
Abstract:
Date: December 7
Speaker: Donald Knuth (Stanford CS)
Topic: On the design of programming languages
Time: 2:15 - 4
Place: 380Y (Math corner)
Abstract:
Date: December 14
Speaker: Everyone
Topic: Summary and discussion
Time: 2:15 - 4
Place: 380Y (Math corner)
Abstract:
We will discuss the talks given during the quarter, seeing what kind of
picture of talkware emerges from them. We will also talk about
possibilities for next quarter. The interest (by both speakers and
the audience) so far indicates that we should continue it, either in
the same format or with changes.
∂03-Nov-83 1048 RIGGS@SRI-AI.ARPA Temporary Housing Offer
Received: from SRI-AI by SU-AI with TCP/SMTP; 3 Nov 83 10:48:02 PST
Date: Thu 3 Nov 83 10:47:36-PST
From: RIGGS@SRI-AI.ARPA
Subject: Temporary Housing Offer
To: CSLI-Folks@SRI-AI.ARPA
The Stanford Philosophy Department has referred an offer of
temporary housing to us which some of you may want to know about for
visiting scholars. Stan Rose at (415) 324-0457 is offering his home
January 1984 through June 1984. It is a luxury condominium, 2
bedroom, 2 1/2 baths at Menlo Towers. He is looking for an adult
couple with no children and will rent it for $1,000 a month.
-------
∂03-Nov-83 1624 BRODER@SU-SCORE.ARPA Puzzle
Received: from SU-SCORE by SU-AI with TCP/SMTP; 3 Nov 83 16:23:55 PST
Date: Thu 3 Nov 83 16:22:13-PST
From: Andrei Broder <Broder@SU-SCORE.ARPA>
Subject: Puzzle
To: aflb.all@SU-SCORE.ARPA
Stanford-Office: MJH 325, Tel. (415) 497-1787
In the land of Oz there are n rich people; their investments are such
and the Oz stock market is such, that at any given moment if you
consider a pair of people, a,b, the probability that a is richer than
b = the probability that b is richer than a = 1/2. True or false: At
any given moment, for every person in the group, the probability that
s/he is the richest is exactly 1/n?
Have fun,
Andrei
-------
False to Broder's problem. Clearly a counterexample requires at least 3
people, and we can do it with 3. Call them the Tin Woodman, Dorothy and
the Cowardly Lion. The Tin Woodman and Dorothy at each moment flip a coin
to determine which has 0 and which has 1. The Cowardly Lion
conservatively always has 0.5 and is never the richest.
∂03-Nov-83 1710 LAWS@SRI-AI.ARPA AIList Digest V1 #88
Received: from SRI-AI by SU-AI with TCP/SMTP; 3 Nov 83 17:10:10 PST
Date: Thursday, November 3, 1983 1:09PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #88
To: AIList@SRI-AI
AIList Digest Thursday, 3 Nov 1983 Volume 1 : Issue 88
Today's Topics:
Molecular Computers - Comment,
Sequential Systems - Theoretical Sufficiency,
Humanness - Definition,
Writing Analysis - Reference,
Lab Report - Prolog and SYLLOG at IBM,
Seminars - Translating LISP & Knowledge and Reasoning
----------------------------------------------------------------------
Date: 1 Nov 83 1844 EST
From: Dave.Touretzky@CMU-CS-A
Subject: Comment on Molecular Computers
- - - - Begin forwarded message - - - -
Date: Tue, 1 Nov 1983 12:19 EST
From: DANNY%MIT-OZ@MIT-MC.ARPA
To: Daniel S. Weld <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Molecular Computers
I was at the Molecular Computer conference. Unfortunately, there has
been very little progress since the Molecular Electronics conference a year
ago. The field is too full of people who think analog computation is
"more powerful" and who think that Goedel's proof shows that people
can always think better than machines. Sigh.
--danny
------------------------------
Date: Thursday, 3 November 1983 13:27:10 EST
From: Robert.Frederking@CMU-CS-CAD
Subject: Parallel vs. Sequential
Re: Phillip Kahn's claim that "not ALL parallel computations can be made
sequential": I don't believe it, unless you are talking about infinitely
many processing elements. The Turing Machine is the most powerful model of
computation known, and it is inherently serial (and equivalent to a
Tesselation Automaton, which is totally parallel). Any computation that
requires all the values at an "instant" can simply run at N times the
sampling rate of your sensors: it locks them, reads each one, and makes its
decisions after looking at all of them, and then unlocks them to examine the
next time slice. If one is talking practically, this might not be possible
due to speed considerations, but theoretically it is possible. So while at
a theoretical level ALL parallel computations can be simulated sequentially,
in practice one often requires parallelism to cope with real-world speeds.
------------------------------
Date: 2 Nov 83 10:52:22 PST (Wednesday)
From: Hoffman.es@PARC-MAXC.ARPA
Subject: Awareness, Human-ness
Sorry it took me a while to track this down. It's something I recalled
when reading the discussion of awareness in V1 #80. It's been lightly
edited.
--Rodney Hoffman
**** **** **** **** **** **** **** ****
From Richard Rorty's book, "Philosophy and The Mirror of Nature":
Personhood is a matter of decision rather than knowledge, an acceptance
of another being into fellowship rather than a recognition of a common
essence.
Knowledge of what pain is like or what red is like is attributed to
beings on the basis of their potential membership in the community.
Thus babies and the more attractive sorts of animal are credited with
"having feelings" rather than (like machines or spiders) "merely
responding to stimuli." To say that babies know what heat is like, but
not what the motion of molecules is like is just to say that we can
fairly readily imagine them opening their mouths and remarking on the
former, but not the latter. To say that a gadget that says "red"
appropriately *doesn't* know what red is like is to say that we cannot
readily imagine continuing a conversation with the gadget.
Attribution of pre-linguistic awareness is merely a courtesy extended to
potential or imagined fellow-speakers of our language. Moral
prohibitions against hurting babies and the better looking sorts of
animals are not based on their possession of feelings. It is, if
anything, the other way around. Rationality about denying civil rights
to morons or fetuses or robots or aliens or blacks or gays or trees is a
myth. The emotions we have toward borderline cases depend on the
liveliness of our imagination, and conversely.
------------------------------
Date: 1 November 1983 18:55 EDT
From: Herb Lin <LIN @ MIT-ML>
Subject: writing analysis
You might want to take a look at some of the stuff by R. Flesch
who is the primary exponent of a system that takes word and sentence
and paragraph lengths and turns it into grade-equivalent reading
scores. It's somewhat controversial.
[E.g., The Art of Readable Writing. Or, "A New Readability Index",
J. of Applied Psychology, 1948, 32, 221-233. References to other
authors are also given in Cherry and Vesterman's writeup of the
STYLE and DICTION systems included in Berkeley Unix. -- KIL]
------------------------------
Date: Monday, 31-Oct-83 11:49:55-GMT
From: Bundy HPS (on ERCC DEC-10) <Bundy@EDXA>
Subject: Prolog and SYLLOG at IBM
[Reprinted from the Prolog Digest.]
Date: 9 Oct 1983 11:43:51-PDT (Sunday)
From: Adrian Walker <ADRIAN.IBM@Rand-Relay>
Subject: Prolog question
IBM Research Laboratory K51
5600 Cottle Road
San Jose
CA 95193 USA
Telephone: 408-256-6999
ARPANet: Adrian.IBM@Rand-Relay
10th October 83
Alan,
In answer to your question about Prolog implementations, we
do most of our work using the Waterloo Prolog 1.3 interpreter
on an IBM mainframe (3081). Although not a traditional AI
environment, this turns out to be pretty good. For instance,
the speed of the Interpreter turns out to be about the same
as that of compiled DEC-10 Prolog (running on a DEC-10).
As for environment, the system delivered by Waterloo is
pretty much stand alone, but there are several good environments
built in Prolog on top of it.
A valuable feature of Waterloo Prolog 1.3 is a 'system' predicate,
which can call anything on the system, E.g. a full screen editor.
The work on extracting explanations of 'yes' and 'no' answers
from Prolog, which I reported at IJCAI, was done in Waterloo
Prolog. We have also implemented a syllogistic system called
SYLLOG, and several expert system types of applications. An
English language question answerer written by Antonio Porto and
me, produces instantaneous answers, even when the 3081 has 250
users.
As far as I know, Waterloo Prolog only runs under the VM operating
system (not yet under MVS, the other major IBM OS for mainframes).
It is available, for a moderate academic licence fee, from Sandra
Ward, Department of Computing Services, University of Waterloo,
Waterloo, Ontario, Canada.
We use it with IBM 3279 colour terminals, which adds variety to a
long day at the screen, and can also be useful !
Best wishes,
-- Adrian Walker
Walker, A. (1981). 'SYLLOG: A Knowledge Based Data Management
System,' Report No. 034. Computer Science Department, New York
University, New York.
Walker, A. (1982). 'Automatic Generation of Explanations of
Results from Knowledge Bases,' RJ3481. Computer Science
Department, IBM Research Laboratory, San Jose, California.
Walker, A. (1983a). 'Data Bases, Expert Systems, and PROLOG,'
RJ3870. Computer Science Department, IBM Research Laboratory,
San Jose, California. (To appear as a book chapter)
Walker, A. (1983b). 'Syllog: An Approach to Prolog for
Non-Programmers.' RJ3950, IBM Research Laboratory, San Jose,
California. (To appear as a book chapter)
Walker, A. (1983c). 'Prolog/EX1: An Inference Engine which
Explains both Yes and No Answers.'
RJ3771, IBM Research Laboratory, San Jose, California.
(Proc. IJCAI 83)
Walker, A. and Porto, A. (1983). 'KBO1, A Knowledge Based
Garden Store Assistant.'
RJ3928, IBM Research Laboratory, San Jose, California.
(In Proc Portugal Workshop, 1983.)
------------------------------
Date: Mon 31 Oct 83 22:57:03-CST
From: John Hartman <CS.HARTMAN@UTEXAS-20.ARPA>
Subject: Fri. Grad Lunch - Understanding and Translating LISP
[Reprinted from the UTEXAS-20 bboard.]
GRADUATE BROWN BAG LUNCH - Friday 11/4/83, PAI 5.60 at noon:
I will talk about how programming knowledge contributes to
understanding programs and translating between high level languages.
The problems of translating between LISP and MIRROR (= HLAMBDA) will
be introduced. Then we'll look at the translation of A* (Best First
Search) and see some examples of how recognizing programming cliches
contributes to the result.
I'll try to keep it fairly short with the hope of getting critical
questions and discussion.
Old blurb:
I am investigating how a library of standard programming constructs
may be used to assist understanding and translating LISP programs.
A programmer reads a program differently than a compiler because she
has knowledge about computational concepts such as "fail/succeed loop"
and can recognize them by knowing standard implementations. This
recognition benefits program reasoning by creating useful abstractions and
connections between program syntax and the domain.
The value of cliche recognition is being tested for the problem of
high level translation. Rich and Temin's MIRROR language is designed
to give a very explicit, static expression of program information
useful for automatically answering questions about the program. I am
building an advisor for LISP to MIRROR translation which will exploit
recognition to extract implicit program information and guide
transformation.
------------------------------
Date: Wed, 2 Nov 83 09:17 PST
From: Moshe Vardi <vardi@Diablo>
Subject: Knowledge Seminar
[Forwarded by Yoni Malachi <YM@SU-AI>.]
We are planning to start at IBM San Jose a research seminar on
theoretical aspects of reasoning about knowledge, such as reasoning
with incomplete information, reasoning in the presence of
inconsistencies, and reasoning about changes of belief. The first few
meetings are intended to be introductory lectures on various attempts
at formalizing the problem, such as modal logic, nonmonotonic logic,
and relevance logic. There is a lack of good research in this area,
and the hope is that after a few introductory lectures, the format of
the meetings will shift into a more research-oriented style. The
first meeting is tentatively scheduled for Friday, Nov. 18, at 1:30,
with future meetings also to be held on Friday afternoon, but this may
change if there are a lot of conflicts. The first meeting will be
partly organizational in nature, but there will also be a talk by Joe
Halpern on "Applying modal logic to reason about knowledge and
likelihood".
For further details contact:
Joe Halpern [halpern.ibm-sj@rand-relay, (408) 256-4701]
Yoram Moses [yom@sail, (415) 497-1517]
Moshe Vardi [vardi@su-hnv, (408) 256-4936]
03-Nov-83 0016 MYV Knowledge Seminar
We may have a problem with Nov. 18. The response from Stanford to the
announcement is overwhelming, but we have a room for only 25 people.
We may have to postpone the seminar.
To be added to the mailing list contact Moshe Vardi (MYV@sail,vardi@su-hnv)
------------------------------
End of AIList Digest
********************
∂03-Nov-83 1826 @SU-SCORE.ARPA:JMC@SU-AI
Received: from SU-SCORE by SU-AI with TCP/SMTP; 3 Nov 83 18:26:10 PST
Received: from SU-AI.ARPA by SU-SCORE.ARPA with TCP; Thu 3 Nov 83 18:25:07-PST
Date: 03 Nov 83 1825 PST
From: John McCarthy <JMC@SU-AI>
To: aflb.all@SU-SCORE
False to Broder's problem. Clearly a counterexample requires at least 3
people, and we can do it with 3. Call them the Tin Woodman, Dorothy and
the Cowardly Lion. The Tin Woodman and Dorothy at each moment flip a coin
to determine which has 0 and which has 1. The Cowardly Lion
conservatively always has 0.5 and is never the richest.
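[ A quick check of the counterexample: the Tin Woodman is richer than
  Dorothy with probability 1/2 by the coin flip; he is richer than the
  Cowardly Lion exactly when his coin gives him 1, again probability
  1/2; and likewise for Dorothy against the Lion. So every pair meets
  the 1/2 condition, yet the probabilities of being richest are 1/2,
  1/2, and 0 rather than 1/3 each. ]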
∂03-Nov-83 2008 CLT SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
To: "@DIS.DIS[1,CLT]"@SU-AI
SPEAKER: Jose Meseguer, SRI
TITLE: COMPUTABILITY OF ABSTRACT DATA TYPES
TIME: Wednesday, November 9, 4:15-5:30 PM
PLACE: Stanford Mathematics Dept. Faculty Lounge (383-N)
Abstract Data Types (ADTs) are initial models in equationally defined classes
of algebras; they are widely used in current programming languages and
programming methodologies. The talk will discuss ADTs, some basic facts
about computable algebras, and recent characterization theorems for
computable ADTs.
Coming Events:
November 16, Yoram Moses
∂03-Nov-83 2114 @SRI-AI.ARPA:CLT@SU-AI SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
Received: from SRI-AI by SU-AI with TCP/SMTP; 3 Nov 83 21:14:29 PST
Received: from SU-AI.ARPA by SRI-AI.ARPA with TCP; Thu 3 Nov 83 21:14:27-PST
Date: 03 Nov 83 2008 PST
From: Carolyn Talcott <CLT@SU-AI>
Subject: SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
To: "@DIS.DIS[1,CLT]"@SU-AI
SPEAKER: Jose Meseguer, SRI
TITLE: COMPUTABILITY OF ABSTRACT DATA TYPES
TIME: Wednesday, November 9, 4:15-5:30 PM
PLACE: Stanford Mathematics Dept. Faculty Lounge (383-N)
Abstract Data Types (ADTs) are initial models in equationally defined classes
of algebras; they are widely used in current programming languages and
programming methodologies. The talk will discuss ADTs, some basic facts
about computable algebras, and recent characterization theorems for
computable ADTs.
Coming Events:
November 16, Yoram Moses
∂04-Nov-83 0029 LAWS@SRI-AI.ARPA AIList Digest V1 #89
Received: from SRI-AI by SU-AI with TCP/SMTP; 4 Nov 83 00:28:07 PST
Date: Thursday, November 3, 1983 4:59PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #89
To: AIList@SRI-AI
AIList Digest Friday, 4 Nov 1983 Volume 1 : Issue 89
Today's Topics:
Intelligence - Definition & Measurement & Necessity for Definition
----------------------------------------------------------------------
Date: Tue, 1 Nov 83 13:39:24 PST
From: Philip Kahn <v.kahn@UCLA-LOCUS>
Subject: Definition of Intelligence
When it comes down to it, isn't intelligence the ability to
recognize space-time relationships? The nice thing about this definition
is that it recognizes that ants, programs, and humans all possess
varying degrees of intelligence (that is, varying degrees in their
ability to recognize space-time relationships). This implies that
intelligence is only correlative, and only indirectly related to
physical environmental interaction.
------------------------------
Date: Tue, 1 Nov 1983 22:22 EST
From: SLOAN%MIT-OZ@MIT-MC.ARPA
Subject: Slow intelligence/chess
... Suppose you played chess at strength 2000 given 5 seconds
per move, 2010 given 5 minutes, and 2050 given as much time as
you desired...
An excellent point. Unfortunately wrong. This is a common error,
made primarily by 1500 players and promoters of chess toys. Chess
ratings measure PERFORMANCE at TOURNAMENT TIME CONTROLS (generally
ranging between 1.5 to 3 moves per minute). To speak of "strength
2000 at 5 seconds per move" or "2500 given as much time as desired" is
absolutely meaningless. That is why there are two domestic rating
systems, one for over-the-board play and another for postal chess.
Both involve time limits, but the limits are very different, and the
ratings are not comparable. There is probably some correlation, but
the sets of skills involved are incomparable.
This is entirely in keeping with the view that intelligence is
coupled with the environment, and involves a speed factor (you must
respond in "real-time" - whatever that happens to mean.) It also
speaks to the question of "loop-avoidance": in the real world, you
can't step in the same stream twice; you must muddle through, ready or
not.
To me, this suggests that all intelligent behavior consists of
generating crude, but feasible solutions to problems very quickly (so
as to be ready with a response) and then incrementally improving the
solution as time permits. In an ever changing environment, it is
better to respond inadequately than to ponder moot points.
-Ken Sloan
------------------------------
Date: Tue, 1 Nov 1983 10:15:54 EST
From: AXLER.Upenn-1100@Rand-Relay (David M. Axler - MSCF Applications
Mgr.)
Subject: Turing Test Re-visited
I see that the Turing Test has (not unexpectedly) crept back into the
discussions of intelligence (1:85). I've wondered a bit as to whether the
TT shouldn't be extended a bit; to wit, the challenge it poses should not only
include the ability to "pass" the test, but also the ability to act as a judge
for the test. Examining the latter should give us all sorts of clues as to
what preconceived notions we're imposing when we try to develop a machine or
program that satisfies only Turing's original problem.
Dave Axler
------------------------------
Date: Wed, 2 Nov 1983 10:10 EST
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Parallelism & Consciousness
What I meant is that defining intelligence seems as pointless as
defining "life" and then arguing whether viruses are alive instead of
asking how they work and solve the problems that appear to us to be
the interesting ones. Instead of defining so hard, one should look to
see what there is.
For example, about the loop-detecting thing, it is clear that in full
generality one can't detect all Turing machine loops. But we all know
intelligent people who appear to be caught, to some extent, in thought
patterns that appear rather looplike. That paper of mine on jokes
proposes that the problem of being intelligent enough to keep out of
simple loops is solved by a variety of heuristic loop detectors, etc.
Of course, this will often deflect one from behaviors that aren't
loops and which might lead to something good if pursued. That's life.
I guess my complaint is that I think it is unproductive to be so
concerned with defining "intelligence" to the point that you even
discuss whether "it" is time-scale invariant, rather than, say, how
many computrons it takes to solve some class of problems. We want to
understand problem-solvers, all right. But I think that the word
"intelligence" is a social one that accumulates all sorts of things
that one person admires when observed in others and doesn't understand
how to do. No doubt, this can be narrowed down, with great effort,
e.g., by excluding physical skills (probably wrongly, in a sense) and
so forth. But it seemed to me that the discussion here in AILIST was
going nowhere toward understanding intelligence, even in that sense.
In other words, it seems strange to me that there is no public
discussion of substantive issues in the field...
------------------------------
Date: Wed, 2 Nov 1983 10:21 EST
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Intelligence and Competition
The ability to cope with a CHANGE
in the environment marks intelligence.
See, this is what's usually called adaptiveness. This is why you
don't get anywhere defining intelligence -- until you have a clear idea
to define. Why be enslaved to the fact that people use a word, unless
you're sure it isn't a social accumulation?
------------------------------
Date: 2 Nov 1983 23:44-PST
From: ISAACSON@USC-ISI
Subject: Re: Parallelism & Consciousness
From Minsky:
...I think that the word "intelligence" is a social one
that accumulates all sorts of things that one person
admires when observed in others and doesn't understand how to
do...
In other words, it seems strange to me that there
is no public discussion of substantive issues in the
field...
Exactly... I agree on both counts. My purpose is to help
crystallize a few basic topics, worthy of serious discussion, that
relate to those elusive epiphenomena that we tend to lump under
that loose characterization: "Intelligence". I read both your LM
and Jokes papers and consider them seminal in that general
direction. I think, though, that your ideas there need, and
certainly deserve, further elucidation. In fact, I was hoping
that you would be willing to state some of your key points to
this audience.
More than this. Recently I've been attracted to Doug
Hofstadter's ideas on subcognition and think that attention
should be paid to them as well. As a matter of fact, I see
certain affinities between you two and would like to see a good
discussion that centers on LM, Jokes, and Subcognition as
Computation. I think that, in combination, some of the most
promising ideas for AI are awaiting full germination in those
papers.
------------------------------
Date: Thu, 3 Nov 1983 13:17 EST
From: BATALI%MIT-OZ@MIT-MC.ARPA
Subject: Inscrutable Intelligence
From Minsky:
...I think that the word "intelligence" is a social one
that accumulates all sorts of things that one person
admires when observed in others and doesn't understand how to
do...
This seems like an extremely negative and defeatist thing to say.
What does it leave us in AI to do, but to ignore the very notion we
are supposedly trying to understand? What will motivate one line of
research rather than another, what can we use to judge the quality of
a piece of research, if we have no idea what it is we are after?
It seems to me that one plausible approach to AI is to present an
arguable account of what intelligence is about, and then to show that
some mechanism is intelligent according to that account. The account,
the "definition", of intelligence may not be intuitive to everyone at
first. But the performance of the mechanisms constructed in accord
with the account will constitute evidence that the account is correct.
(This is where the Turing test comes in, not as a definition of
intelligence, but as evidence for its presence.)
------------------------------
Date: Tue 1 Nov 83 13:10:32-EST
From: SUNDAR@MIT-OZ
Subject: parallelism and consciousness
[Forwarded by RickL%MIT-OZ@MIT-MC.]
[...]
It seems evident from the recent conversations that the meaning of
intelligence is much more than mere 'survivability' or 'adaptability'.
Almost all the views expressed, however, took for granted the concept of
"time" - which, it seems to me, is 'a priori' (in the Kantian sense).
What do you think of a view that says: intelligence is the ability of
an organism that enables it to preserve, propagate and manipulate these
'a priori' concepts?
The motivation for doing so could be a simple pleasure/pain mechanism
(which again, I feel, are concepts not adequately understood). It would
seem that while the pain mechanism would help cut down large search
spaces when the organism comes up against such problems, the pleasure
mechanism would help in learning and in the acquisition of new 'a priori'
wisdom.
Clearly, in the case of organisms that multiply by fission (where the line
of division between parent and child is not exactly clear), the structure
of the organism may be preserved. In such cases it would seem that the
organism survives seemingly forever. However, it would not be considered
intelligent by the definition proposed above.
The questions that seem interesting to me, therefore, are:
1. How do humans acquire the concept of 'time'?
2. 'Change' seems to be measured in terms of time (adaptation, survival,
etc. are all the presence or absence of change), but 'time' itself seems
to be meaningless without 'change'!
3. How do humans decide whether an organism is 'intelligent' or not?
It seems to me that most of the people on the AIList made judgements (the
amoeba, desert tortoise, and cockroach examples), which should mean that
they either knew what intelligence was or wasn't - but it still isn't
exactly clear after all the smoke's cleared.
Any comments on the above ideas? As a relative novice to the field
of AI I'd appreciate your opinions.
Thanks.
--Sundar--
------------------------------
Date: Thu, 3 Nov 1983 16:42 EST
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Inscrutable Intelligence
Sure. I agree you want an account of what intelligence is "about".
When I complained about making a "definition" I meant
one of those useless compact thingies in dictionaries.
But I don't agree that you need this for scientific motivation.
Batali: do you really think Biologists need definitions of Life
for such purposes?
Finally, I simply don't think this is a compact phenomenon.
Any such "account", if brief, will be very partial and incomplete.
To expect a test to show that "the account is correct" depends
on the nature of the partial theory. In a nutshell, I still
don't see any use at all for
such definition, and it will lead to calling all sorts of
partial things "intelligence". The kinds of accounts to confirm
are things like partial theories that need their own names, like
heuristic search method
credit-assignment scheme
knowledge-representation scheme, etc.
As in biology, we simply are much too far along to be so childish as
to say "this program is intelligent" and "this one is not". How often
do you see a biologist do an experiment and then announce "See, this
is the secret of Life". No. He says, "this shows that enzyme
FOO is involved in degrading substrate BAR".
------------------------------
Date: 3 Nov 1983 14:45-PST
From: ISAACSON@USC-ISI
Subject: Re: Inscrutable Intelligence
I think that your message was really addressed to Minsky, who
already replied.
I also think that the most one can hope for are confirmations of
"partial theories" relating, respectively, to various aspects
underlying phenomena of "intelligence". Note that I say
"phenomena" (plural). Namely, we may have on our hands a broad
spectrum of "intelligences", each one of which the manifestation
of somewhat *different* mix of underlying ingredients. In fact,
for some time now I feel that AI should really stand for the
study of Artificial Intelligences (plural) and not merely
Artificial Intelligence (singular).
------------------------------
Date: Thu, 3 Nov 1983 19:29 EST
From: BATALI%MIT-OZ@MIT-MC.ARPA
Subject: Inscrutable Intelligence
From: MINSKY%MIT-OZ at MIT-MC.ARPA
do you really think Biologists need definitions of Life
for such purposes?
No, but if anyone were claiming to be building "Artificial Life",
that person WOULD need some way to evaluate research. Remember, we're
not just trying to find out things about intelligence, we're not just
trying to see what it does -- like the biochemist who discovers enzyme
FOO -- we're trying to BUILD intelligences. And that means that we
must have some relatively precise notion of what we're trying to build.
Finally, I simply don't think this is a compact phenomenon.
Any such "account", if brief, will be very partial and incomplete.
To expect a test to show that "the account is correct" depends
on the nature of the partial theory. In a nutshell, I still
don't see any use at all for
such definition, and it will lead to calling all sorts of
partial things "intelligence".
If the account is partial and incomplete, and leads to calling partial
things intelligence, then the account must be improved or rejected.
I'm not claiming that an account must be short, just that we need
one.
The kinds of accounts to confirm
are things like partial theories that need their own names, like
heuristic search method
credit-assignment scheme
knowledge-representation scheme, etc.
But why are these things interesting? Why is heuristic search better
than "blind" search? Why need we assign credit? Etc? My answer:
because such things are the "right" thing to do for a program to be
intelligent. This answer appeals to a pre-theoretic conception of
what intelligence is. A more precise notion would help us
assess the relevance of these and other methods to AI.
One potential reason to make a more precise "definition" of
intelligence is that such a definition might actually be useful in
making a program intelligent. If we could say "do that" to a program
while pointing to the definition, and if it "did that", we would have
an intelligent program. But I am far too optimistic. (Perhaps
"childishly" so).
------------------------------
End of AIList Digest
********************
∂04-Nov-83 0222 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #47
Received: from SU-SCORE by SU-AI with TCP/SMTP; 4 Nov 83 02:22:17 PST
Date: Thursday, November 3, 1983 9:54AM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #47
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Friday, 4 Nov 1983 Volume 1 : Issue 47
Today's Topics:
Implementations - User Convenience Vs. Elegance,
& DataBases & Rename
----------------------------------------------------------------------
Date: Wed 2 Nov 83 15:38:06-MST
From: Uday Reddy <U-Reddy@UTAH-20>
Subject: Purity of =..
Ref: Abbott, Purity, Prolog Digest, 1, 40, (20 Oct 83)
Richard, Reply to My Critics, Prolog Digest, 1, 44, ( 1 Nov 83)
The above references discussed the purity of =.. (UNIV). My position
is that =.. is not pure in either first-order or higher-order logic,
in the sense that it is "referentially opaque".
In the terminology of logicians, a "name" is used in a referentially
transparent way if its meaning is independent of the context in which
it appears. In particular, a name that is "used" (to denote some
semantic object) cannot be "mentioned" (to denote the name itself),
in order to preserve referential transparency.
In the context of programming, name means any syntactic object:
identifier, expression, term, clause, program, pointer or what have
you. We use these syntactic objects to denote semantic objects,
like, values, functions, and relations. As long as they are used
to denote semantic objects, their syntactic structure cannot be
"referenced" in the program. Classic cases of violations of this
principle are the use of variables in imperative languages and QUOTE
and EVAL of LISP.
Prolog has several features that violate referential transparency.
Some of them are var, functor, arg, univ and call. To see why,
consider the simple example
1 + 2 =.. [+, 1, 2]
Since 1+2 denotes a semantic object (the integer 3), its syntactic
structure should be transparent to the program. But using =.. allows
the program to look at its syntactic structure. 2+1 denotes the same
semantic object as 1+2, but replacing 1+2 by 2+1 in the above
literal does not preserve its truthfulness.
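Concretely, at the Prolog top level one would expect something like:
        ?- 1 + 2 =.. L.             % succeeds with L = [+, 1, 2]
        ?- 2 + 1 =.. [+, 1, 2].     % fails
The second goal fails because =.. exposes the syntactic structure
+(2, 1) rather than the value 3, even though 1+2 and 2+1 "mean" the
same integer.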
Features like =.. can, however, be used in a referentially
transparent way. When a program manipulates syntactic objects without
using them to denote semantic objects, using these features is
perfectly acceptable. Examples of such programs are compilers and
theorem provers. It is an interesting exercise to design languages
which allow these features to be used only in a referentially
transparent way.
Arguments of the kind "all programs with a feature X can be
transformed into first-order programs; so programs with feature X
are first-order" used by Richard should be treated with scepticism.
Transformations do not preserve first-order-ness or any such property
of programs. All languages can be transformed into Turing machines.
It does not mean, of course, that all languages have the same
properties as those of Turing machines.
Once referential transparency is lost, there is really no point in
talking about the "order" of the language. univ and call (like eval
and quote of LISP) can be used to convert programs into data and data
into programs and anything is possible. Here is a definition of map
(a higher order relation) using univ and call.
map([],[],P).
map([A|X],[B|Y],P) :- Goal =.. [P,A,B], call(Goal), map(X,Y,P).
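For example, given an illustrative predicate double/2 (not part of the
definition above),
        double(X, Y) :- Y is X * 2.
the goal
        ?- map([1,2,3], L, double).
builds and calls double(1,B1), double(2,B2) and double(3,B3) via univ
and call, succeeding with L = [2,4,6].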
Whether you choose to call Prolog first-order or higher-order is your
choice. For me, it is neither.
-- Uday Reddy
------------------------------
Date: Wed, 2 Nov 83 14:31:34 PST
From: Faustus%UCBCory@Berkeley (Wayne A. Christopher)
Subject: Database Hacking Ideas
I have heard about Prolog systems that include facilities to
partition the database and to control the sections that are
searched when satisfying goals. Does anybody know of such systems ?
If not, does anybody have any ideas on how such a thing should
be implemented ? My idea is to have predicates to create and remove
database "modules", and to specify which ones are to be searched
and into which one newly entered clauses are to be inserted. The
advantages I can see from this are more control over evaluation,
and faster searching of very large databases where the program
will be able to tell where the data it wants will be. Does anyone
have any more advantages to this scheme, disadvantages, or other
comments ?
-- Wayne Christopher
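[One way such a facility might look - a sketch only, meant as a
starting point for discussion; the module predicates below are
hypothetical and are emulated here by tagging stored facts with a
module name rather than by a real partitioned database:
        % add a fact to a named module
        add←to←module(Module, Fact) :-
                assert(stored(Module, Fact)).
        % discard a module and everything in it
        drop←module(Module) :-
                retract(stored(Module, ←)), fail.
        drop←module(←).
        % look a goal up in the listed modules, searched in order
        in←modules([Module|←], Goal) :-
                stored(Module, Goal).
        in←modules([←|Rest], Goal) :-
                in←modules(Rest, Goal).
A real implementation would keep each partition separately indexed
instead of funnelling everything through one stored/2 relation.]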
------------------------------
Date: Wednesday, 2-Nov-83 20:08:28-GMT
From: Richard HPS (on ERCC DEC-10) <OKeefe.R.A at EDXA>
Subject: What Should Rename Do ?
What should rename do ?
To start with, I find rename(X,[]) as a way of deleting a file
"cute" rather than clear. It is quite awkward, in fact, on UNIX,
where "[]" is a perfectly good file name. I don't think we'd be
doing Prolog any great harm if we split this into two:
        rename(Old, New)        - Old and New can be ANY object which
        delete(Old)             -   name/2 accepts as first argument.
"2" and "3.456" are perfectly good file names in UNIX and Bottoms-10.
That's not the problem. The problem is what should happen to
Old if it is open ? In fact DEC-10 Prolog *insists* that Old should
be open, and closes it. The result is that if you had something like
p :-
        see(fred),
        q,
        ...
q :-
        ...
        delete(fred)
        ...
delete(File) :-
        seeing(Old),
        see(File),
        rename(File, []),
        see(Old).
the second see/1 in delete/1 will try to reopen fred, and of course
won't find it. And the fact that rename/2 will set the input to
'user' is not obvious.
C-Prolog does not require that the old file be open. It just
goes ahead and does the rename. This has the extremely odd result
that
        seeing(Current),
        rename(Current, gotcha),
        seeing(Input),  % succeeds binding Input=Current
        see(Input)      % fails!
It would be possible for C-Prolog to change the name of the
file in its own tables, so that seeing(Input) would bind Input=gotcha
and see(gotcha) would then be a no-op. Version 1.4a.EdAI may well do
that. The trouble is that the user's program might still be hanging
on to the old atom, as for example
input←redirected(File, Command) :-
        exists(File),
        seeing(Old),
        see(File),
        (   call(Command), !, seen, see(Old)
        ;   seen, see(Old), fail
        ).
How could the file module be expected to know that Old should be
changed when someone else renames it ?
There is also something we might call the "close problem". If
you have a program which is reading from some file, and you enter the
debugger, i/o in the debugger will be redirected to the terminal.
There is nothing then to stop you entering a break (in the DEC-10
debugger you don't even have to do that) and giving the command
close(X), where X just happens to be the file the program is reading
from... This used to crash C-Prolog. 1.4.EdAI and 1.4.SRI solve
this problem (using different code) by rejecting an attempt to close
a file that is open in a lower break state. I haven't dared to try
it in DEC-10 Prolog, but I would expect that the result would be for
the broken code to reopen the file from the beginning. Ugly.
The input-output system of DEC-10 Prolog was designed using the
principle "whatever helps the user is good". That is, nothing was
put in until there was a need for it, and then the simplest approach
that seemed to work was adopted. Unfortunately, PDP-11 Prolog, EMAS
Prolog, and C-Prolog have copied this "ad hack" solution in the name
of "compatibility". The Prolog component of PopLog has library code
to mimic this behaviour as well though it also has access to Pop11's
rather cleaner i/o.
Can we call a halt to this ? It seems clear that representing
files by their names is a mistake. We cannot in DEC-10 Prolog have
two pointers into the same file ! Streams of some sort have far
fewer problems. In particular they don't have the rename problem
or the close problem. Suggestions ?
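One shape a stream interface might take (a sketch only; open/3,
close/1, and the stream-taking read/2 used below are hypothetical,
not DEC-10 or C-Prolog built-ins):
        term←count(File, N) :-
                open(File, read, S),
                count←terms(S, 0, N),
                close(S).
        count←terms(S, N0, N) :-
                read(S, Term),
                (   Term = end←of←file ->
                        N = N0
                ;       N1 is N0 + 1,
                        count←terms(S, N1, N)
                ).
Because S is an explicit stream, counting the terms of a file never
disturbs the caller's current input, and two such streams into the
same file can coexist without difficulty.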
------------------------------
End of PROLOG Digest
********************
∂04-Nov-83 0900 KJB@SRI-AI.ARPA announcements and items for newsletter
Received: from SRI-AI by SU-AI with TCP/SMTP; 4 Nov 83 09:00:09 PST
Date: Fri 4 Nov 83 08:55:08-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: announcements and items for newsletter
To: csli-folks@SRI-AI.ARPA
Let me second Diane's plea for more word about past and future
activities for the newsletter.
Here is an example of how it can pay off. The Newsletter is getting
unbelievably large circulation. Because of the short item I put in
reporting on the issues that arose in my talk about operational vs
denotational semantics, people at IBM in San Jose interested in these
issues have been in touch, and this led to the contact with the group
there on knowledge and action.
Besides, it will just make it more interesting reading if we get
reports of what has gone on.
Jon
-------
∂04-Nov-83 1438 HANS@SRI-AI.ARPA Job in Konstanz/Germany
Received: from SRI-AI by SU-AI with TCP/SMTP; 4 Nov 83 14:37:45 PST
Date: Fri 4 Nov 83 14:35:27-PST
From: Hans Uszkoreit <Hans@SRI-AI.ARPA>
Subject: Job in Konstanz/Germany
To: csli-friends@SRI-AI.ARPA
The Sonderforschungsbereich 99 at the University of Konstanz, West Germany,
is looking for a Computer Scientist (starting January 1, 1984).
"Applicants should have experience with natural language systems,
familiarity with linguistics and artificial intelligence and a good
knowledge of French."
"Applications must be sent in by November 30, 1983 to:
Prof. Dr. Christoph Schwarze, Fachgruppe Sprachwissenschaft der
Universitaet Konstanz, Postfach 5560, D-7750 Konstanz."
-------
∂04-Nov-83 1536 HANS@SRI-AI.ARPA csli mail
Received: from SRI-AI by SU-AI with TCP/SMTP; 4 Nov 83 15:34:27 PST
Date: Fri 4 Nov 83 15:29:37-PST
From: Hans Uszkoreit <Hans@SRI-AI.ARPA>
Subject: csli mail
To: csli-friends@SRI-AI.ARPA
1. The person who is responsible for updating our mailing lists is now
Emma Pease. If you have suggestions for additions or changes of mailing
lists or addresses send them to CSLI-REQUESTS@SRI-AI. (Do NOT send
them to CSLI-FRIENDS.)
2. Please use the address CSLI-FRIENDS only for announcements of
general interest. Avoid unnecessary duplication by not sending long
messages to CSLI-FRIENDS if they will be sent to this address again
as part of the weekly newsletter.
Messages sent to this address will be distributed to about 150
people and might also be forwarded automatically to other
distribution lists or posted at special-topic bulletin boards.
3. These are our central mailing addresses:
address: distribution:
CSLI-FOLKS@SRI-AI CSLI-affiliated people
CSLI-PEOPLE@SRI-AI CSLI-affiliated people (same as CSLI-FOLKS)
CSLI-FRIENDS@SRI-AI CSLI affiliates + people who will be
invited for colloquia etc.
CSLI-PRINCIPALS@SRI-AI SL principals
CSLI-ADMINISTRATION@SRI-AI CSLI directors and staff
CSLI-BUILDING@SRI-AI CSLI building committee
CSLI-COMPUTING@SRI-AI CSLI computing committee
CSLI-EXECUTIVES@SRI-AI CSLI executive committee and directors
CSLI-REQUESTS@SRI-AI address changes, additions, gripes
4. Please, do not edit the corresponding mailing lists yourselves; a
short notice to CSLI-REQUESTS@SRI-AI is all it takes to get in your
changes.
5. In addition to the central mailing addresses, we have 16 addresses
for the individual research project groups A1,...,D4. If a project
group wants to use their mailing address for distributing mail to all
members of the group, then the group needs to update the corresponding
mailing list. For instructions look at <CSLI>group-mail.info.
(If nobody in your group has an account on the SRI-AI machine, send a
message to CSLI-REQUESTS@SRI-AI for help.)
Gripes, suggestions, requests to CSLI-REQUESTS@SRI-AI.
-------
∂04-Nov-83 1943 @SRI-AI.ARPA:vardi%SU-HNV.ARPA@SU-SCORE.ARPA Knowledge Seminar
Received: from SRI-AI by SU-AI with TCP/SMTP; 4 Nov 83 19:43:01 PST
Received: from SU-SCORE.ARPA by SRI-AI.ARPA with TCP; Fri 4 Nov 83 19:42:07-PST
Received: from Diablo by Score with Pup; Fri 4 Nov 83 19:40:24-PST
Date: Fri, 4 Nov 83 19:28 PST
From: Moshe Vardi <vardi%Diablo@SU-Score>
Subject: Knowledge Seminar
To: alpert@su-score, andy@su-score, ashok@su-score, bcm@su-ai, berlin@parc,
bmoore@sri-ai, brachman@sri-kl, cck@su-ai, csli-friends@sri-ai,
csli@sri-ai, dhm@su-ai, fagan@sumex, fateman%ucbkim@berkeley,
ferguson@sumex, fuzzy1@aids-unix, genesereth@sumex, georgeff@sri-ai,
grosof@su-score, hsu@su-score, jag@su-isl, jjf@su-hnv, jk@su-ai,
key.pa@parc-maxc, klc@su-ai, konolige@sri-ai, kuper@su-hnv,
laws@sri-ai, levesque@sri-kl, lgc@su-ai, lowrance@sri-ai, ma@su-ai,
marzullo.pa@parc, pednault@sri-ai, peters@sri-ai, pkr@su-ai,
restivo@su-score, riemen@su-score, rosenberg@park, rosenchein@sumex,
rperrault@sri-ai, sso.yamanouchi@su-sierra, stefan@su-score,
tu@su-score, tyson@sri-ai, vsingh@sumex, wbd.tym@office, ym@su-ai,
yom@su-ai, zaven@su-score
Due to the overwhelming response to my announcement and the need to
find a bigger room, the first meeting is postponed to Dec. 9,
10:00am.
Moshe Vardi
∂05-Nov-83 0107 LAWS@SRI-AI.ARPA AIList Digest V1 #90
Received: from SRI-AI by SU-AI with TCP/SMTP; 5 Nov 83 01:06:57 PST
Date: Friday, November 4, 1983 9:43PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #90
To: AIList@SRI-AI
AIList Digest Saturday, 5 Nov 1983 Volume 1 : Issue 90
Today's Topics:
Intelligence,
Looping Problem
----------------------------------------------------------------------
Date: Thu, 3 Nov 1983 23:46 EST
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Inscrutable Intelligence
One potential reason to make a more precise "definition" of
intelligence is that such a definition might actually be useful
in making a program intelligent. If we could say "do that" to a
program while pointing to the definition, and if it "did that",
we would have an intelligent program. But I am far too
optimistic.
I think so. You keep repeating how good it would be to have a good
definition of intelligence and I keep saying it would be as useless as
the biologists' search for the definition of "life". Evidently
we're talking past each other so it's time to quit.
Last word: my reason for making the argument was that I have seen
absolutely no shred of good ideas in this forum, apparently because of
this definitional orientation. I admit the possibility that some
good mathematical insight could emerge from such discussions. But
I am personally sure it won't, in this particular area.
------------------------------
Date: Friday, 4 November 1983, 01:17-EST
From: jcma@MIT-MC
Subject: Inscrutable Intelligence
[Reply to Minsky.]
BOTTOM LINE: Have you heard of OPERATIONAL DEFINITIONS?
You are correct in pointing out that we need not have the ultimate definition
of intelligence. But, it certainly seems useful for the practical purposes of
investigating the phenomena of intelligence (whether natural or artificial) to
have at least an initial approximation, an operational definition.
Some people, (e.g., Winston), have proposed "people-like behavior" as their
operational definition for intelligence. Perhaps you can suggest an
incremental improvement over that rather vague definition.
If artificial intelligence can't come up with an operational definition of
intelligence, no matter how crude, it tends to undermine the credibility of the
discipline and encourage the view that AI researchers are flakey. Moreover,
it makes it very difficult to determine the degree to which a program exhibits
"intelligence."
If you were being asked to spend $millions on a field of inquiry, wouldn't you
find it strange (bordering on absurd) that the principal proponents couldn't
render an operational definition of the object of investigation?
p.s. I can't imagine that psychology has no operational definition of
intelligence (in fact, what is it?). So, if worst comes to worst, AI can just
borrow psychology's definition and improve on it.
------------------------------
Date: Fri, 4 Nov 1983 09:57 EST
From: Dan Carnese <DJC%MIT-OZ@MIT-MC.ARPA>
Subject: Inscrutable Intelligence
There's a wonderful quote from Wittgenstein that goes something like:
One of the most fundamental sources of philosophical bewilderment is to have
a substantive but be unable to find the thing that corresponds to it.
Perhaps the conclusion from all this is that AI is an unfortunate name for the
enterprise, since no clear definitions for I are available. That shouldn't
make it seem any less flakey than, say, "operations research" or "management
science" or "industrial engineering" etc. etc. People outside a research area
care little what it is called; what it has done and is likely to do is
paramount.
Trying to find the ultimate definition for field-naming terms is a wonderful,
stimulating philosophical enterprise. However, one can make an empirical
argument that this activity has little impact on technical progress.
------------------------------
Date: 4 Nov 1983 8:01-PST
From: fc%usc-cse%USC-ECL@SRI-NIC
Subject: Re: AIList Digest V1 #89
This discussion on intelligence is starting to get very boring.
I think if you want a theoretical basis, you are going to have to
forget about defining intelligence and work on a higher level. Perhaps
finding representational schemes to represent intelligence would be a
more productive line of pursuit. There are such schemes in existence.
As far as I can tell, the people in this discussion have either scorned
them, or have never seen them. Perhaps you should go to the library for
a while and look at what all the great philosophers have said about the
nature of intelligence rather than rehashing all of their arguments in
a light and incomplete manner.
Fred
------------------------------
Date: 3 Nov 83 0:46:16-PST (Thu)
From: hplabs!hp-pcd!orstcs!hakanson @ Ucb-Vax
Subject: Re: Parallelism & Consciousness - (nf)
Article-I.D.: hp-pcd.2284
No, no, no. I understood the point as meaning that the faster intelligence
is merely MORE intelligent than the slower intelligence. Who's to say that
an amoeba is not intelligent? It might be. But we certainly can agree that
most of us are more intelligent than an amoeba, probably because we are
"faster" and can react more quickly to our environment. And some super-fast
intelligent machine coming along does NOT make us UNintelligent, it just
makes it more intelligent than we are. (According to the previous view
that faster = more intelligent, which I don't necessarily subscribe to.)
Marion Hakanson {hp-pcd,teklabs}!orstcs!hakanson (Usenet)
hakanson@{oregon-state,orstcs} (CSnet)
------------------------------
Date: 31 Oct 83 13:18:58-PST (Mon)
From: decvax!duke!unc!mcnc!ecsvax!unbent @ Ucb-Vax
Subject: re: transcendental recursion [& reply]
Article-I.D.: ecsvax.1457
i'm also new on this net, but this item seemed like
a good one to get my feet wet with.
if we're going to pursue the topic of consciousness
vs intelligence, i think it's important not to get
confused about consciousness vs *self*-consciousness at
the beginning. there's a perfectly clear sense in which
any *sentient* being is "conscious"--i.e., conscious *of*
changes in its environment. but i have yet to see any
good reason for supposing that cats, rats, bats, etc.
are *self*-conscious, e.g., conscious of their own
states of consciousness. "introspective" or "self-
monitoring" capacity goes along with self-consciousness,
but i see no particular reason to suppose that it has
anything special to do with *consciousness* per se.
as long as i'm sticking my neck out, let me throw
in a cautionary note about confusing intelligence and
adaptability. cockroaches are as adaptable as all get
out, but not terribly intelligent; and we all know some
very intelligent folks who can't adapt to novelties at
all.
--jay rosenberg (escvax!unbent)
[I can't go along with the cockroach claim. They are a
successful species, but probably haven't changed much in
millions of years. Individual cockroaches are elusive,
but can they solve mazes or learn tricks? As for the
"intelligent folks": I previously stated my preference
for power tests over timed aptitude tests -- I happen to
be rather slow to change channels myself. If these people
are unable to adapt even given time, on what basis can we
say that they are intelligent? If they excel in particular
areas (e.g. idiot savants), we can qualify them as intelligent
within those specialties, just as we reduce our expectations
for symbolic algebra programs. If they reached states of
high competence through early learning, then lost the ability
to learn or adapt further, I will only grant that they >>were<<
intelligent. -- KIL]
------------------------------
Date: 3 Nov 83 0:46:00-PST (Thu)
From: hplabs!hp-pcd!orstcs!hakanson @ Ucb-Vax
Subject: Re: Semi-Summary of Halting Problem Disc [& Comment]
A couple weeks ago, I heard Marvin Minsky speak up at Seattle. Among other
things, he discussed this kind of "loop detection" in an AI program. He
mentioned that he has a paper just being published, which he calls his
"Joke Paper," which discusses the applications of humor to AI. According
to Minsky, humor will be a necessary part of any intelligent system.
If I understood correctly, he believes that there is (will be) a kind
of a "censor" which recognizes "bad situations" that the intelligent
entity has gotten itself into. This censor can then learn to recognize
the precursors of this bad situation if it starts to occur again, and
can intervene. This then is the reason why a joke isn't funny if you've
heard it before. And it is funny the first time because it's "absurd,"
the laughter being a kind of alarm mechanism.
Naturally, this doesn't really help with a particular implementation,
but I believe that I agree with the intuitions presented. It seems to
agree with the way I believe *I* think, anyway.
I hope I haven't misrepresented Minsky's ideas, and to be sure, you should
look for his paper. I don't recall him mentioning a title or publisher,
but he did say that the only reference he could find on humor was a book
by Freud, called "Jokes and the Unconscious."
(Gee, I hope his talk wasn't all a joke....)
Marion Hakanson {hp-pcd,teklabs}!orstcs!hakanson (Usenet)
hakanson@{oregon-state,orstcs} (CSnet)
[Minsky has previously mentioned this paper in AIList. You can get
a copy by writing to Minsky%MIT-OZ@MIT-MC. -- KIL]
------------------------------
Date: 31 Oct 83 7:52:43-PST (Mon)
From: hplabs!hao!seismo!ut-sally!ut-ngp!utastro!nather @ Ucb-Vax
Subject: Re: The Halting Problem
Article-I.D.: utastro.766
A common characteristic of humans that is not shared by the machines
we build and the programs we write is called "boredom." All of us get
bored running around the same loop again and again, especially if nothing
is seen to change in the process. We get bored and quit.
*---> WARNING!!! <---*
If we teach our programs to get bored, we will have solved the
infinite-looping problem, but we will lose our electronic slaves who now
work, uncomplainingly, on the same tedious jobs day in and day out. I'm
not sure it's worth the price.
Ed Nather
ihnp4!{kpno, ut-sally}!utastro!nather
------------------------------
Date: 31 Oct 83 20:03:21-PST (Mon)
From: harpo!eagle!hou5h!hou5g!hou5f!hou5e!hou5d!mat @ Ucb-Vax
Subject: Re: The Halting Problem
Article-I.D.: hou5d.725
If we teach our programs to get bored, we will have solved the
infinite-looping problem, but we will lose our electronic slaves who now
work, uncomplainingly, on the same tedious jobs day in and day out. I'm
not sure it's worth the price.
Hmm. I don't usually try to play in this league, but it seems to me that there
is a place for everything and every talent. Build one machine that gets bored
(in a controlled way, please) to work on Fermat's last Theorem. Build another
that doesn't to check tolerances on camshafts or weld hulls. This [solving
the looping problem] isn't like destroying one's virginity, you know.
Mark Terribile
Duke Of deNet
------------------------------
End of AIList Digest
********************
∂05-Nov-83 1505 KJB@SRI-AI.ARPA Committee Assignments
Received: from SRI-AI by SU-AI with TCP/SMTP; 5 Nov 83 15:05:36 PST
Date: Sat 5 Nov 83 15:00:46-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: Committee Assignments
To: csli-folks@SRI-AI.ARPA
Dear CSLI Folks:
Here is the final list of committee assignments as approved in the
executive committee last Thursday. The chairperson of each committee
and subcommittee is in caps. The temporal status of the committee is
indicated, but the term of an individual on the committee has not been
determined.
If any of the committee chairpeople are not clear about the task of
the committee, please see me soon.
Thanks for all the help, future as well as past.
Jon
Computing (permanent):
Ray PERRAULT, Brian Smith, Stanley Peters, Terry Winograd,
Mabry Tyson
Building committee (permanent):
PETERS, Macken, Moore, Wasow, Kaplan, Bush
Education (permanent): PERRY, Wasow, Kay, McCarthy, Rosenschein,
Course development subcommittee (fall and winter, 83-84): KAY,
Rosenschein, Bresnan, Pollard
Approaches to human language seminar (fall 83):
Stanley PETERS, Kris Halvorsen
Approaches to computer languages seminar (fall 83):
Brian Smith, Ray Perrault, Fernando Pereira
LISP-course seminar (winter 83-84)
SMITH, des Rivieres
Semantics of Natural Languages Seminar (winter, 83-84):
BARWISE, Stucky
Anaphora Seminar (spring, 84):
BRESNAN, Cohen
Semantics of Computer Languages Seminar (spring, 84): BARWISE,
desRivieres
Computer Wizards Committee (83-84):
TYSON, Uszkoreit, Withgott, desRivieres
(for help with using the computers, especially the new ones we
expect)
Colloquium (permanent):
SAG, Pereira, Pullum (Inner)
ETCHEMENDY, Hobbs (Outer)
Postdoc and long-term visitor Committee (permanent)
MOORE, Winograd, Wasow, Barwise, Stucky, Halvorsen
Workshop Committees:
GROSZ, Sag, Ford, Shieber (long range planning)
PERRY, Almog (Kaplan workshop)
BRATMAN, Konolige (Action and reason)
STICKEL, Winograd, Smith (Constraint languages)
PEREIRA, Konolige, Smith (ML workshop)
PERRAULT, Kay, Appelt (COLING)
KARTTUNEN, Bush (Morphosyntax and Lexical Morphology)
KIPARSKY, Withgott (Lexical Phonology)
TINLunch (permanent):
ROSENSCHEIN, Appelt
Library Committee (83-84):
HOBBS, Perry, Peters
-------
∂05-Nov-83 1513 KJB@SRI-AI.ARPA Grant No. 2
Received: from SRI-AI by SU-AI with TCP/SMTP; 5 Nov 83 15:13:49 PST
Date: Sat 5 Nov 83 15:09:43-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: Grant No. 2
To: csli-principals@SRI-AI.ARPA
Dear all,
You will recall that there was a second grant to cover the
cost of computers and other equipment. There has been considerable
delay in getting the grant letter, and that has caused other
difficulties, especially our not being able to proceed with awarding
the Xerox subcontracts, which had equipment that had to be paid from
that grant. Things looked rather bleak toward midweek.
Betsy has put in an incredible week, but managed to get the
letter from Charlie late Friday! We all owe her a lot for putting in
a skillful week far beyond the call of duty.
Jon
-------
∂05-Nov-83 1633 KJB@SRI-AI.ARPA Advisory Panel
Received: from SRI-AI by SU-AI with TCP/SMTP; 5 Nov 83 16:33:33 PST
Date: Sat 5 Nov 83 16:28:46-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: Advisory Panel
To: csli-folks@SRI-AI.ARPA
Due to a misinterpretation of a syntactically ambiguous sentence,
Dianne thought that Winskel was on the Advisory Panel, and said so in
the newsletter. Of course he is not. She will put a correction in
the next issue. I hope no one was concerned.
-------
∂06-Nov-83 0228 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #48
Received: from SU-SCORE by SU-AI with TCP/SMTP; 6 Nov 83 02:28:01 PST
Date: Saturday, November 5, 1983 11:37PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #48
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Sunday, 6 Nov 1983 Volume 1 : Issue 48
Today's Topics:
Implementations - User Convenience Vs. Elegance & Errors,
LP Library - Updates
----------------------------------------------------------------------
Date: 3 November 1983 2210-PST (Thursday)
From: Abbott at AEROSPACE (Russ Abbott)
Subject: Purity, Again
I agree with Richard that
(1)
f←i(X1, X2, .. , Xn) =.. [f←i, X1, X2, .. , Xn].
call(p←i(X1, X2, .. , Xn)) :- p←i(X1, X2, .. , Xn).
(and let's include)
current←predicate(p, p(X1, X2, .. , Xn)). % Succeeds if p/n
% is defined.
are first order definitions of =.. (univ), call, and
current←predicate for particular f←i and p←i. One could have as
many of these as one wants. (Note, current←predicate is defined
in the Edinburgh C-Prolog that I have. I'm assuming that it's a
more or less standard built-in.)
But I don't suppose one would really want to claim that
(2) (for all n)
F(X1, X2, .. , Xn) =.. [F, X1, X2, .. , Xn].
call(P(X1, X2, .. , Xn)) :-
        current←predicate(P, P(X1, X2, .. , Xn)),
        P(X1, X2, .. , Xn).
current←predicate(P, P(X1, X2, .. , Xn)).
are also first order definitions ? I suppose the real question is:
when is there a difference between (1) and (2)? As Richard points
out (that David Warren points out), for a static collection of
predicates and functors, (2) can be understood as an abbreviation
for (1). But as Richard also points out, if one allows assert,
(2) gives one more than (1). For example,
assert(new←p(a)),
call(new←p(a))
will succeed with (2) but not with (1). It will not succeed with
(1) since under (1) the clause:
call(new←p(X1, X2, .. , Xn)) :- new←p(X1, X2, .. , Xn).
does not exist.
Given all that, and also given that, in fact, Prolog's are built
with assert and with the (2) version of the functions discussed,
my next question is: Why don't most Prolog's allow
call(P(X1, X2, .. , Xn)).
for variable P, where call is defined as above? It certainly would
be a convenience for the user in certain situations. According to
the preceding argument it is essentially the same as allowing
assert--which they do. And in any event, if the system doesn't do
it for me, I can define my own. (Although the syntax isn't quite
so pretty.)
my←call(P, X) :-
        current←predicate(P, Y),
        not P = my←call, % to avoid an infinite loop.
        Y,
        Y =.. [P | X].
If P and X are initially uninstantiated, my←call(P, X) will
succeed with P instantiated in turn to all the predicates that
succeed with some argument(s), and with X instantiated to the
successful argument list(s).
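For instance, with an illustrative fact (not part of the definition)
        likes(mary, wine).
in the database, the goal
        ?- my←call(P, Args).
would enumerate, among other solutions, P = likes with
Args = [mary, wine].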
-- Russ Abbott
------------------------------
Date: Wednesday, 2-Nov-83 18:52:49-GMT
From: Richard HPS (on ERCC DEC-10) <OKeefe.R.A at EDXA>
Subject: Error Handling
The assert/retract question having generated more fury than
findings, I suppose it is silly of me to ask another "imperialistic"
question. Given that the only public response to my proposed design
principle for what should be an error and what should fail was an
explicit repudiation of consistency in favour of "user convenience",
notoriously ill-defined, it's not just silly, it's downright stupid.
But the question is an important one, so I boldly ask
What should Prolog do when it
detects an error?
Let's keep separate the debate over whether "X is 1+a" should
just fail (my view) or give an error message. There are some things
where we are agreed that error messages should be produced, say
"X is 1+←" (instantiation fault) or "X is 2↑300" (overflow). And a
Prolog program may detect errors of its own. Let's also keep syntax
errors for another debate, and concentrate on these run-time errors.
There is no denying that the error messages in DEC-10 Prolog are
rather unhelpful. The way that compiled code says "! mode error" and
doesn't say what predicate, and just keeps running, is a real pain.
I have put a wee bit more thought into the error messages in C-Prolog
1.4.EdAI, but not much, and the fact that several of the errors turn
tracing on is less helpful than I expected. Chris Mellish put a lot
of thought into the error messages of PDP-11 Prolog, which try to say
clearly what the problem is and what is a likely cause. The trouble
is that the real cause is often something else. But let's keep the
design of error messages separate from the actions performed when an
error is detected.
Clearly, the program cannot continue. Making all errors fail
would be defensible, as an error is one way of failing to prove a
goal, but I think that failure should be reserved for when you KNOW
that you have failed to prove the goal, not for when the machine has
turned out to be inadequate. (I repeat, I want "X is 1+a" to fail,
but if you want to treat it as an error, and print some sort of
message, then you shouldn't make it fail.)
One possibility would be to copy Lisp 1.5, and to have an
errorset. The Prolog interpreters sold by Expert Systems Ltd. have
something like this. I don't remember the details, but it is like
PrincipalGoal if←error AlternativeGoal
where PrincipalGoal is the goal you are really interested in, but if
an error occurs while proving it, the interpreter fails out to the
"if←error" goal and tries the alternative. This is quite neat. It
is easy to understand procedurally, and it gives you something very
like "recovery blocks", E.g.
sort(Raw, Ord) :-
        experimental←sort(Raw, Ord)
            if←error merge←sort(Raw, Ord)
            if←error insertion←sort(Raw, Ord).
If an error occurs in the AlternativeGoal, it is NOT caught by the
if←error.
This isn't bad. It has deficiencies rather than problems. They
provide another predicate for picking up the name of the error (error
names are numbers, I think), so your handler can do different things
for different errors, and there is another predicate for signalling
an error. But you would in general like to control whether an error
message is printed, whether a stack trace is printed, whether the
debugger is otherwise entered, and these things have to be done when
the error is detected. By the time you arrive in the AlternativeGoal
the stack that the programmer might want to look at has gone.
The main problem with this scheme is that it is so simple that
people will be tempted to use it as a control structure, a means for
effecting non-local GOTOs.
Something which should help to prevent that would be a means of
detecting in advance whether an error is likely to occur. A good
example of that is see(File) and tell(File) when the file does not
exist. DEC-10 Prolog has a nasty little flag "nofileerrors" (this is
an excellent example of a "feature" put in for "user convenience")
which makes these commands fail if the file doesn't exist, in the
normal state they produce an error message. There is a library
predicate exists(File) which tells if a file exists. (This is built
in in C-Prolog.) Instead of hacking the flag, you can write
load(File) :-
        exists(File),
        see(File),
        ....
        seen.
load(File) :-
        \+ exists(File),
        writef('Sorry, can''t open %t, nothing done.\n',
               [File]).
To handle input-output more generally, I suggest a new system
predicate "cant←io(Goal,Reason)" which determines why an I/O command
cannot be done, or fails if it can. I am not aware of any predicate
like this in any existing Prolog system. In my "imperialist" way, I
am asking whether the community think it as good an idea as I do, and
if there are improvements to it. The following should be recognised:
cant←io(see(File), ←) -- openable for input
cant←io(tell(File), ←) -- openable for output
cant←io(append(File), ←) -- openable for output at end
[C-Prolog v1.4.EdAI has append(File) as a command corresponding to
fopen(File, "a") in C. Its omission from DEC-10 Prolog is probably
because of the general hostility of Bottoms-10..]
cant←io(rename(Old,New), ←) -- rename/delete
cant←io(cd(Directory), ←) -- change directory
[C-Prolog v1.2B.EdAI had cd. I have written a "cd" program for
Bottoms-10 and it was amazingly painful. Its omission from DEC-10
Prolog comes as no surprise to me whatsoever.]
cant←io(save(←), ←)
cant←io(restore(←), ←)
...
You get the picture. I am not sure whether cant←io(get0(←), ←) should
exist or not, because there is already a way of detecting end of file
without using errors. Having used Pascal and PL/I, I think it easier
to write programs that check for possible problems first than to code
error handlers, however congenial the error system.
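With cant←io/2 (still only a proposal; no current system provides it)
the earlier load/1 example could check for trouble first and report
the reason, something like:
        load(File) :-
                cant←io(see(File), Reason),
                !,
                writef('Sorry, can''t open %t: %t.\n', [File, Reason]).
        load(File) :-
                see(File),
                ....
                seen.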
The best way of handling arithmetic overflow is to provide long
integers, so that it can't happen. DEC-10 Prolog has a long integer
(and rational) package, {SU-SCORE}PS:<PROLOG>LONG.PL . A new Prolog
implementation could quite easily do long integers in C, there is a
C library "mint" that might be usable.
I have seen another Prolog system that handled errors another
way. The idea was that a clause like
p(X...Z) :-
        tests...,
        cause←error(Error).
would effectively be replaced by
p(X...Z) :-
        $handle←error(Error, p(X...Z)).
where
$handle←error(Error, Culprit) :-
        handle←error(Error, Culprit), !.
$handle←error(Error, Culprit) :-
        ... print message, enter debugger ...
This is not an accurate description, just an outline. When I first
saw it, I was greatly taken with it. "Neat!" I thought. But I have
since come to see its problems.
The major problem is that there is ONE error handler for the
whole program. An instantiation error in a user query might need one
approach, while the same error (even in the same clause) in a part of
the program might need another approach. With a single error handler
you have to get around that by putting "I am doing X" into the data
base. You can guess how much I like THAT.
The fact that you can add your own clauses to the error handler
is nice. And the fact that they are called in the context of the
error is also pleasant, because you can write E.g.
handle←error(Error, Culprit) :-
        writef('! Error %t involving %t.\n',
               [Error, Culprit]),
        break,
        abort.
and let the user invoke the debugger to examine the state. There is
still a problem here, though. You may want to fail after all, and to
do that you need something like an ancestor cut. In general, digging
out the information you need to decide can be hard too. And while it
is less convenient than if←error, because it is the *program* which
decides what is to happen to errors, it still tempts programmers into
using the exception-handling mechanism as a control structure.
If you have implemented a Prolog system which has some other
method of handling errors, please tell us about it. NOW is the time
to discuss it, while Prolog is still growing and before we all end up
with incompatible InterPrologs and MacPrologs.
My suggestion is this. That whenever an error is detected, by
the Prolog system or by signal←error(...) in a Prolog program, an error
message should be displayed on the terminal, and the user should be
asked what s/he wants to do. The options should include
- enter the debugger
- abort to the nearest top level
- show the stack history and ask again
- let if←error take over
and maybe more. If the program was running compiled, entering the
debugger could be hairy, but it is something we need anyway. If the
Prolog system had to fail out a couple of levels and retry the goal
using interpreted code that wouldn't be too bad. In the debugger, the
user can say what s/he wants to fail, so if s/he wants the faulted
goal to fail, that's not Prolog's responsibility. And the fact that
*every* error can be caught by the user and made to do something else
should stop programmers using error handling as a control structure.
The best candidate I know for the actual handling mechanism is Expert
Systems Ltd's "if←error".
Oh yes, before anyone shouts at me "EVIL! You MUSTN'T dictate
to the programmer what s/he does with error handlers. That's
AUTHORITARIAN, and that's the worst crime in the book!" let me point
out that in practical real-life programming, other people than the
author have to read the program, even maintain it. I'm concerned for
those people: if they see something that looks like an error handler,
then indeed it should be something that handles errors. If it turns
out that programmers really cannot live without a nonlocal goto such
as CATCH and THROW, the answer is to provide CATCH and THROW. There
is no evidence yet that people do need non-local gotos in Prolog. The
old DEC-10 Prolog compiler provided the ancestor cut (which goes part
of the way, it is certainly non-local), the current one does not, and
in four years of Prolog programming I have never missed it, though I
*have* missed nested scopes (as provided in functional languages).
The cleanest way of incorporating bizarre control structures is to
use another language such as Lisp or PopLog where they make sense.
------------------------------
Date: Sat 5 Nov 83 18:56:14-PST
From: Chuck Restivo <Restivo@SU-SCORE>
Subject: LP Library Updates
Read←Sent.Pl and Ask.Pl have been added to the utility library
at {SU-SCORE} on PS:<Prolog> . For those readers who have read
only access to the network, I have a limited number of hard
copies that could be mailed.
-- ed
Abstract for Read←Sent.Pl
% Author : R.A.O'Keefe
% Updated: 29 October 83
% Purpose: to provide a flexible input facility
% Needs : memberchk from utils.
/* read←until(Delimiters, Answer)
reads characters from the current input until a character
in the Delimiters string is read. The characters are
accumulated in the Answer string, and include the closing
delimiter. Prolog returns end-of-file as ↑Z (26)
regardless of the user's assignment (E.g. if you use ↑D as
end of file, Prolog still returns ↑Z). The end of the
file is always a delimiter.
*/
Abstract for Ask.Pl
% Author : R.A.O'Keefe
% Updated: Thursday November 3rd, 1983, 0:26:26 am
% Purpose: ask questions that have a one-character answer.
/* ask(Question, Answer)
displays the Question on the terminal and reads a
one-character answer from the terminal. But because
you normally have to type "X <CR>" to get the computer
to attend to you, it skips to the end of the line.
All the juggling with see and tell is to make sure that
i/o is done to the terminal even if your program is doing
something else. The character returned will have Ascii
code in the range 33..126 (that is, it won't be a space
or a control character).
*/
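A typical call (shown here only as an illustration) would be
        ask('Continue (y/n)? ', Answer)
after which Answer holds the Ascii code of the character typed,
E.g. 121 if the user answered "y".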
------------------------------
End of PROLOG Digest
********************
∂07-Nov-83 0228 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #49
Received: from SU-SCORE by SU-AI with TCP/SMTP; 7 Nov 83 02:27:53 PST
Date: Sunday, November 6, 1983 11:21PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #49
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Monday, 7 Nov 1983 Volume 1 : Issue 49
Today's Topics:
Implementations - An Algorithmic Capability,
& User Convenience Vs. Elegance & I/O,
Publications - Paper from Argonne
----------------------------------------------------------------------
Date: 4 November 1983 1105-PST (Friday)
From: Abbott at AEROSPACE (Russ Abbott)
Subject: Adding an Algorithmic Capability to Prolog
I'd like to suggest a notion that many readers of this Digest
may consider heresy, namely that Prolog is not the ultimate
programming language. And I'd like to open a discussion on
the following subject (if I'm not stoned for making the
suggestion): How can an algorithmic capability be included
cleanly in Prolog ?
As prolog (pardon) let me say that I've been very impressed
with the way that Prolog *is* able to accommodate certain
programming paradigms:
- concurrency (in certain senses), types (in certain senses),
- object-oriented-ness (in certain senses), etc.
I see Prolog as in the process of developing a programming
style that suits its capabilities. I'm excited and
pleased with the process. I expect that this process
will probably go on for several years while Prolog
either annexes or rejects various modern programming
notions.
My problem is more fundamental. Prolog is not an algorithmic
language: that is supposed to be one of its strengths. Yet in
programming computers it seems that one cannot avoid algorithms
--in the sense of step-by-step sequences of actions that one
wants to have occur. Even Prolog programs include algorithms.
I challenge any Prolog programmer to claim that he has never
written an algorithm in Prolog. Currently one writes algorithms
in Prolog by making use of the algorithm by which Prolog itself
attempts to satisfy goals. By riding on the coat tails of that
algorithms one can compute anything computable. But doing so
often obscures what one wants rather than clarifies it.
Algorithms appear most often in those Prolog programs that include
Prolog predicates with side effects, I.e., the "impurities" in
Prolog, such as read, write, assert, etc. I claim that we will
never get rid of these impurities. It seems to me that, like
it or not:
(1) any program that is to be of much use in the world has
to have I/O, and
(2) any program that is not simply a function (and very few
programs of any real use are) has to have a way of changing
its behavior on the basis of past inputs. That is,
programs must have a "database" capability of some sort,
and that means assert or the equivalent.
Another argument for adding an algorithmic capability may be
built on the nature of Prolog as a "passive" language. In
"pure" Prolog one cannot define a "process," I.e., something
that continues to operate in time. At its heart Prolog lets
one create a static body of information that may be queried.
It does not provide means to define an ongoing process that
continues in operation indefinitely, doing whatever it is
supposed to do. Yet many programs do have that form.
The Prolog interpreter itself is of this form. Once started,
it's an ongoing process that is always interacting with the
user at the terminal. It makes no (predicate-logical) sense
to write the top level of the Prolog interpreter as a Prolog
predicate. Even if one could make the following work as an
interactive program (e.g., by making Input and Output special
variable names), I'm not happy with:
interpret([Input | Inputs], [Output | Outputs]) :-
process(Input, Output),
interpret(Inputs, Outputs).
Since asserts performed for one input may affect how later inputs
are processed, this forces asserts in the middle of interpret to
affect the same call to interpret--since there is only one call
to interpret. More generally, it implies (incorrectly) that all
the inputs exist at once. It ignores the element of time completely.
Why should one have to live with this fiction ?
Nor am I happy with the other approach:
interpret :-
read(Term),
process(Term),
interpret.
or
interpret :-
repeat,
read(Term),
process(Term), % process has a cut at the end.
fail.
These are both the "coat tail" approach, using Prolog's underlying
satisfaction algorithm for a purpose for which it wasn't "intended."
In addition the first version stacks wasted return pointers for no
good reason. (I know that tail recursion can be optimized away, but
that isn't the point.) The second is even worse conceptually; it has
little if anything to do with Prolog as a logic language. Both of
these also suffer from the problem mentioned above--asserts within
a call to interpret affect later processing of the same call (since
there is only one call to interpret).
The problem may be summarized: predicate calculus is a language of
static facts; algorithms express sequences of operations in time.
Prolog implements predicate calculus; most useful programs operate
in time.
So given all that what am I asking ? I'm suggesting that Prolog
needs to be wedded cleanly to some way to express algorithms--rather
than to continue to use the coat tail approach to writing algorithms
in Prolog. I'd like to open a discussion on how best to do that.
I think that a side benefit of such a successful wedding will be
that most, if not all, of the "impurities" in current Prolog can
be moved to the algorithm side--where they will not be impurities.
The extended Prolog will thus be both cleaner and more useful.
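[ One concrete direction, offered here purely as an illustrative
  sketch rather than anything from the message above: instead of
  assert, the changing "database" can be threaded through the
  interpreter as an explicit argument, so that the effect of one
  input on the processing of later inputs is visible in the logic
  itself. The predicate process/3 is hypothetical.

  interpret(State) :-
      read(Term),
      process(Term, State, NewState),  % relates the old "database" to the new
      interpret(NewState).

  read/1 is still impure I/O, of course; the point is only that the
  "memory" no longer hides behind a side effect. -ed ]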
I am now heading for my bomb shelter to await replies.
-- Russ Abbott
------------------------------
Date: Fri 4 Nov 83 09:39:25-PST
From: Pereira@SRI-AI
Subject: Referential Transparency
The argument about the referential opacity of =.. is wrong: 1+2 in
Prolog DOES NOT denote 3, for the very simple reason that function
symbols in Prolog are uninterpreted. What is NOT referentially
transparent in Prolog is machine arithmetic via "is". "call" is also
referentially opaque, for similar reasons; e.g., "call" and "is" will
succeed or produce an error depending on the evaluation order. A
semblance of referential transparency can be rescued from the mess by
claiming that Prolog is a PARTIAL implementation of Horn clause logic.
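[ A few goals making the distinction concrete -- an illustration
  added here, not part of the message:

  ?- X = 1+2.          % X = 1+2, i.e. the term +(1,2); nothing is evaluated
  ?- X is 1+2.         % X = 3; "is" invokes machine arithmetic
  ?- T =.. [+, 1, 2].  % T = 1+2; =.. only builds the term
  ?- X is Y+2.         % error if Y is uninstantiated; the outcome depends
                       %   on evaluation order, hence the opacity of "is" -ed ]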
However, some people have observed that the Kowalski-van Emden
equation between logical and denotational semantics is less relevant
to Prolog than claimed, because a complete denotational semantics of
Prolog SHOULD deal with termination, errors, etc. Alan Mycroft has
developed an elegant account of termination which does not require a
notion of state, but it looks like such a nice account will not be
possible for the effect of "is" and "call", not to mention other,
nastier, features.
-- Fernando Pereira
PS. I have more to say on this point, but I'll keep it for
the next time for the sake of brevity.
------------------------------
Date: Fri 4 Nov 83 09:54:34-PST
From: Pereira@SRI-AI
Subject: Prolog I/O
I agree with the gist of Richard's comments on Prolog I/O, but
let me clarify the "history" of the DEC-10 Prolog I/O predicates
(I implemented them starting from suggestions from David Warren).
The main goal was to get some I/O going in the shortest possible
time so that the system's bootstrapping compiler could read
program files. In this context, most of the problems mentioned
do not occur at all. Then, because writing I/O code for TOPS-10
is very painful, nobody, including myself, could be bothered
to develop something that better satisfied the needs of an interactive
environment. Finally, because writing I/O code is just as
unglamorous, if not as difficult, in Unix and TOPS-20, every
implementor in the "Edinburgh tradition" just copied the
existing evaluable predicates, not in the interests of compatibility
(we didn't care about that then; after all, there weren't many
people using the systems), but simply because that wasn't where the
fun of implementing Prolog is: designing clever data structures,
stack layouts, space-saving techniques, etc.
-- Fernando Pereira
------------------------------
Date: 5-Nov-83 10:41:44-CST (Sat)
From: Overbeek@ANL-MCS (Overbeek)
Subject: Stalking The Gigalip
E. W. Lusk and I recently wrote a short note concerning attempts
to produce high-speed Prolog machines. I apologize for perhaps
restating the obvious in the introduction. In any event we
solicit comments.
Stalking the Gigalip
Ewing Lusk
Ross A. Overbeek
Mathematics and Computer Science Division
Argonne National Laboratory
Argonne, Illinois 60439
1. Introduction
The Japanese have recently established the goal of producing a
machine capable of performing between 10 million and 1 billion
logical inferences per second (where a logical inference corresponds
to a Prolog procedure invocation). The motivating belief is that
logic programming unifies many significant areas of computer science,
and that expert systems based on logic programming will be the
dominant application of computers in the 1990s. A number of
countries have at least considered attempting to compete with the
Japanese in the race to attain a machine capable of such execution
rates. The United States funding agencies have definitely indicated
a strong desire to compete with the Japanese in the creation of such
a logic engine, as well as in the competition to produce
supercomputers that can deliver at least two orders of magnitude
improvement (measured in megaflops) over current machines. Our goal
in writing this short note is to offer some opinions on how to go
about creating a machine that could execute a gigalip. It is
certainly true that the entire goal of creating such a machine should
be subjected to severe criticism. Indeed, we feel that it is
probably the case that a majority of people in the AI research
community feel that it represents (at best) a misguided effort.
Rather than entering this debate, we shall concentrate solely on
discussing an approach to the goal. In our opinion a significant
component of many of the proposed responses by researchers in the
United States is based on the unstated assumption that the goal
itself is not worth pursuing, and that the benefits will accrue from
additional funding to areas in AI that only minimally impinge on the
stated objective.
[ This paper is available on {SU-SCORE} as:
PS:<Prolog>ANL-LPHunting.Txt
There is a limited supply of hard copies that
can be mailed to those with read-only access
to this newsletter -ed ]
------------------------------
End of PROLOG Digest
********************
∂07-Nov-83 0920 LAWS@SRI-AI.ARPA AIList Digest V1 #91
Received: from SRI-AI by SU-AI with TCP/SMTP; 7 Nov 83 09:19:23 PST
Date: Sunday, November 6, 1983 10:51PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #91
To: AIList@SRI-AI
AIList Digest Monday, 7 Nov 1983 Volume 1 : Issue 91
Today's Topics:
Parallelism,
Turing Machines
----------------------------------------------------------------------
Date: 1 Nov 83 22:39:06-PST (Tue)
From: hplabs!hao!seismo!rlgvax!cvl!umcp-cs!israel @ Ucb-Vax
Subject: Re: Parallelism and Conciousness
Article-I.D.: umcp-cs.3498
[Initial portion missing. -- KIL]
a processing unit that we can currently build. If you mean 'at the
exact same time', then I defy you to show me a case where this is
necessary.
The statement "No algorithm is inherently parallel", just means that
the algoritm itself (as opposed to the engineering of putting it
into practice) does not necessarily have to be done in parallel.
Any parallel algorithm that you give me, I can write a sequential
algorithm that does the same thing.
Now, if you assume a finite number of processors for the parallel
algorithm, then the question of whether the sequential algorithm will
work under time constraints is dependent on the speed of the
processor it runs on. I don't know if there has been any work
done on theoretical limits of the speed of a processor (Does
anyone know? is this a meaningful question?), but if we assume
none (a very chancy assumption at best), then any parallel algorithm
can be done sequentially in practice.
If you allow an infinite number of processors for the parallel
algorithm, then the sequential version of the algorithm can't
ever work in practice. But can the parallel version? What
do we run it on? Can you picture an infinitely parallel
computer accompanied by robots with shovels, so that when the
computer needs an unallocated processor and has none,
the robots dig up the appropriate minerals and construct
the processor? Of course, it doesn't need to be said that
if the system notices that the demand for processors is
faster than the robots' processor production output, then
the robots make more robots to help them with the raw materials
gathering and the construction. :-)
--
↑-↑ Bruce ↑-↑
University of Maryland, Computer Science
{rlgvax,seismo}!umcp-cs!israel (Usenet) israel.umcp-cs@CSNet-Relay (Arpanet)
------------------------------
Date: 31 Oct 83 19:55:44-PST (Mon)
From: pur-ee!uiucdcs!uicsl!dinitz @ Ucb-Vax
Subject: Re: Parallelism and Conciousness - (nf)
Article-I.D.: uiucdcs.3572
I see no reason why consciousness should be inherently parallel. But
it turns out that the only examples of conscious entities (i.e. those
which nearly everyone agrees are conscious) rely heavily on parallelism
at several levels. This is NOT to say that they derive their
consciousness from parallelism, only that there is a high correlation
between the two.
There are good reasons why natural selection would favor parallelism.
Besides the usually cited ones (e.g. speed, simplicity) is the fact
that the world goes by very quickly, and carries a high information
content. That makes it desirable and advantageous for a conscious
entity to be aware of several things at once. This strongly suggests
parallelism (although a truly original species might get away with
timesharing).
Pushing in the other direction, I should note that it is not necessary
to bring the full power of the human intellect to bear against ALL of
our environment at once. Hence the phenomenon of attention. It
suffices to have weaker processes in charge of uninteresting phenomena
in the environment, as long as these have the ability to enlist more of
the organism's information processing power when the situation becomes
interesting enough to demand it. (This too could be finessed with a
clever timesharing scheme, but I know of no animal that does it that
way.)
Once again, none of this entails a causal connection between
parallelism and consciousness. It just seems to have worked out that
nature liked it that way (in the possible world in which we live).
Rick Dinitz
...!uiucdcs!uicsl!dinitz
------------------------------
Date: 1 Nov 83 11:53:58-PST (Tue)
From: hplabs!hao!seismo!rochester!blenko @ Ucb-Vax
Subject: Re: Parallelism & Consciousness
Article-I.D.: rocheste.3648
Interesting to see this discussion taking place among people
(apparently) committed to an information-processing model for
intelligence.
I would be satisfied with the discovery of mechanisms that duplicate
the information-processing functions associated with intelligence.
The issue of real-time performance seems to be independent of
functional performance (not from an engineering point of view, of
course; ever tell one of your hardware friends to "just turn up the
clock"?). The fact that evolutionary processes act on both the
information-processing and performance characteristics of a system may
argue for the (evolutionary) superiority of one mechanism over another;
it does not provide prescriptive information for developing functional
mechanisms, however, which is the task we are currently faced with.
Tom
------------------------------
Date: 1 Nov 83 19:01:59-PST (Tue)
From: hplabs!hao!seismo!rlgvax!cvl!umcp-cs!speaker @ Ucb-Vax
Subject: Re: Parallelism and Conciousness
Article-I.D.: umcp-cs.3523
No algorithm is inherently parallel.
The algorithms you are thinking about occur in the serial world of
the Turing machine. Turing machines, remember, have only one
input. Consider what happens to your general-purpose Turing machine
when it must compute on more than one input simultaneously!
So existence in the real world may require parallelism.
How do you define simultaneously? If you mean within a very short
period of time, then that requirement is based on the maximum speed of
a processing unit that we can currently build. If you mean 'at the
exact same time', then I defy you to show me a case where this is
necessary.
A CHALLENGE!!! Grrrrrrrr......
Okay, let's say we have two discrete inputs that must
be monitored by a Turing machine. Signals may come in
over these inputs simultaneously. How do you propose
to monitor both discretes at the same time? You can't
monitor them as one input because your Turing machine
is allowed only one state at a time on its read/write head.
Remember that the states of the inputs run as fast as
those of the Turing machine.
You can solve this problem by building two Turing machines,
each of which may look at the discretes.
I don't have to appeal to practical speeds of processors.
We're talking pure theory here.
--
- Speaker-To-Stuffed-Animals
speaker@umcp-cs
speaker.umcp-cs@CSnet-Relay
------------------------------
Date: 1 Nov 83 18:41:10-PST (Tue)
From: hplabs!hao!seismo!rlgvax!cvl!umcp-cs!speaker @ Ucb-Vax
Subject: Infinite loops and Turing machines...
Article-I.D.: umcp-cs.3521
One of the things I did in my undergrad theory class was to prove that
a multiple-tape Turing machine is equivalent to one with a single tape
(several tapes were very handy for programming). Also, we showed that
a TM with a 2-dimensional tape infinite in both x and y was also
equivalent to a single-tape TM. On the other hand, the question of
a machine with an infinite number of read heads was left open...
Aha! I knew someone would come up with this one!
Consider that when we talk of simultaneous events... we speak of
simultaneous events that occur within one Turing machine state
and outside of the Turing machine itself. Can a one-tape
Turing machine read the input of 7 discrete sources at once?
A 7 tape machine with 7 heads could!
The reason that they are not equivalent is that we have
allowed for external states (events) outside of the machine
states of the Turing machine itself.
--
- Speaker-To-Stuffed-Animals
speaker@umcp-cs
speaker.umcp-cs@CSnet-Relay
------------------------------
Date: 1 Nov 83 16:56:19-PST (Tue)
From: hplabs!hao!seismo!philabs!linus!security!genrad!mit-eddie!rlh @
Ucb-Vax
Subject: Re: Parallelism and Conciousness
Article-I.D.: mit-eddi.885
requirement is based on the maximum speed of
a processing unit that we can currently build. If you mean 'at the
exact same time', then I defy you to show me a case where this is
necessary.
The statement "No algorithm is inherently parallel", just means that
the algorithm itself (as opposed to the engineering of putting it
into practice) does not necessarily have to be done in parallel.
Any parallel algorithm that you give me, I can write a sequential
algorithm that does the same thing.
Consider the retina, and its processing algorithm. It is certainly
true that once the raw information has been collected and in some way
band-limited, it can be processed in either fashion; but one part of
the algorithm must necessarily be implemented in parallel. To get
the photon efficiencies that are needed for dark-adapted vision
(part of the specifications for the algorithm) one must have some
continuous, distributed attention to the light field. If I match
the spatial and temporal resolution of the retina, call it several thousand
by several thousand by some milliseconds, by sequentially scanning with
a single receptor, I can only catch one in several-squared million
photons, not the order of one in ten that our own retina achieves.
------------------------------
Date: 2 Nov 83 19:44:21-PST (Wed)
From: pur-ee!uiucdcs!uicsl!preece @ Ucb-Vax
Subject: Re: Parallelism and Conciousness - (nf)
Article-I.D.: uiucdcs.3633
There is a significant difference between saying "No algorithm is
inherently parallel" and saying "Any algorithm can be carried out
without parallelism." There are many algorithms that are
inherently parallel. Many (perhaps all) of them can be SIMULATED
without true parallel processing.
I would, however, support the contention that computational models
of natural processes need not follow the same implementations, and
that a serial simulation of a parallel process can produce the
same result.
scott preece
ihnp4!uiucdcs!uicsl!preece
------------------------------
Date: 2 Nov 83 15:22:20-PST (Wed)
From: hplabs!hao!seismo!philabs!linus!security!genrad!grkermit!masscom
p!kobold!tjt @ Ucb-Vax
Subject: Re: Parallelism and Conciousness
Article-I.D.: kobold.191
Gawd!! Real-time processing with a Turing machine?!
Pure theory indeed!
Turing machines are models for *abstract* computation. You get to
write an initial string on the tape(s) and start up the machine: it
does not monitor external inputs changing asynchronously. You can
define your *own* machine which is just like a Turing machine, except
that it *does* monitor external inputs changing asynchronously (Speaker
machines anyone :-).
Also, if you want to talk *pure theory*, I could just enlarge my input
alphabet on a single input to encode all possible simultaneous values
at multiple inputs.
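[ Editorial illustration, not part of the posting: two input streams
  over {0,1} become a single stream over the four-symbol product
  alphabet {0-0, 0-1, 1-0, 1-1}. A purely illustrative Prolog
  rendering of that encoding:

  product_encode([], [], []).
  product_encode([A|As], [B|Bs], [A-B|Cs]) :-
      product_encode(As, Bs, Cs).

  e.g. product_encode([0,1,1], [1,1,0], Xs) gives Xs = [0-1,1-1,1-0]. ]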
--
Tom Teixeira, Massachusetts Computer Corporation. Littleton MA
...!{harpo,decvax,ucbcad,tektronix}!masscomp!tjt (617) 486-9581
------------------------------
Date: 2 Nov 83 16:28:10-PST (Wed)
From: hplabs!hao!seismo!philabs!linus!security!genrad!grkermit!masscom
p!kobold!tjt @ Ucb-Vax
Subject: Re: Parallelism and Conciousness
Article-I.D.: kobold.192
In regards to the statement
No algorithm is inherently parallel.
which has been justified by the ability to execute any "parallel"
program on a single sequential processor.
The difference between parallel and sequential algorithms is one of
*expressive* power rather than *computational* power. After all, if
it's just computational power you want, why aren't you all programming
Turing machines?
The real question is what is the additional *expressive* power of
parallel programs. The additional expressive power of parallel
programming languages is a result of not requiring the programmer to
serialize steps of his computation when he is uncertain whether either
one will terminate.
--
Tom Teixeira, Massachusetts Computer Corporation. Littleton MA
...!{harpo,decvax,ucbcad,tektronix}!masscomp!tjt (617) 486-9581
------------------------------
Date: 4 Nov 83 8:13:22-PST (Fri)
From: hplabs!hao!seismo!ut-sally!ut-ngp!utastro!nather @ Ucb-Vax
Subject: Our Parallel Eyeballs
Article-I.D.: utastro.784
Consider the retina, and its processing algorithm. [...]
There seems to be a misconception here. It's not clear to me that "parallel
processing" includes simple signal accumulation. Astronomers use area
detectors that simply accumulate the charge deposited by photons arriving
on an array of photosensitive diodes; after the needed "exposure" the charge
image is read out (sequentially) for display, further processing, etc.
If the light level is high, readout can be repeated every few milliseconds,
or, in some devices, proceed continuously, allowing each pixel to accumulate
photons between readouts, which reset the charge to zero.
I note in passing that we tend to think sequentially (our self-awareness
center seems to be serial) but operate in parallel (our heart beats along,
and body chemistry gets its signals even when we're chewing gum). We
have, for the most part, built computers in our own (self)image: serial.
We're encountering real physical limits in serial computing (the finite
speed of light) and clearly must turn to parallel operations to go much
faster. How we learn to "think in parallel" is not clear, but people
who do the logic design of computers try to get as many operations into
one clock cycle as possible, and maybe that's the place to start.
Ed Nather
ihnp4!{ut-sally,kpno}!utastro!nather
------------------------------
Date: 3 Nov 83 9:39:07-PST (Thu)
From: decvax!microsoft!uw-beaver!ubc-visi!majka @ Ucb-Vax
Subject: Get off the Turing Machines
Article-I.D.: ubc-visi.513
From: Marc Majka <majka@ubc-vision.UUCP>
A Turing machine is a theoretical model of computation.
<speaker.umcp-cs@CSnet-Relay> points out that all this noise about
"simultaneous events" is OUTSIDE of the notion of a Turing machine. Turing
machines are a theoretical formulation which gives theoreticians a formal
system in which to consider problems in computability, decidability, the
"hardness" of classes of functions, and etc. They don't really care whether
set membership in a class 0 grammer is decidable in less than 14.2 seconds.
The unit of time is the state transition, or "move" (as Turing called it).
If you want to discuss time (in seconds or meters), you are free to invent a
new model of computation which includes that element. You are then free to
prove theorems about it and attempt to prove it equivalent to other models
of computation. Please do this FORMALLY and post (or publish) your results.
Otherwise, invoking Turing machines is a silly and meaningless exercise.
Marc Majka
------------------------------
Date: 3 Nov 83 19:47:04-PST (Thu)
From: pur-ee!uiucdcs!uicsl!preece @ Ucb-Vax
Subject: Re: Parallelism and Conciousness - (nf)
Article-I.D.: uiucdcs.3677
Arguments based on speed of processing aren't acceptable. The
question of whether parallel processing is required has to be
in the context of arbitrarily fast processors. Thus you can't
talk about simultaneous inputs changing state at processor speed
(unless you're considering the interesting case where the input
is directly monitoring the processor itself and therefore
intrinsically as fast as the processor; in that case you can't
cope, but I'm not sure it's an interesting case with respect to
consciousness).
Consideration of the retina, on the other hand, brings up the
basic question of what is a parallel processor. Is an input
latch (allowing delayed polling) or a multi-input averager a
parallel process or just part of the plumbing? We can also, of
course, group the input bits and assume an arbitrarily fast
processor dealing with the bits 64 (or 128 or 1 million) at a
time.
I don't think I'd be willing to say that intelligence or
consciousness can't be slow. On the other hand, I don't think
there's too much point to this argument, since it's pretty clear
that producing a given level of performance will be easier with
parallel processing.
scott preece
ihnp4!uiucdcs!uicsl!preece
------------------------------
End of AIList Digest
********************
∂07-Nov-83 1027 KONOLIGE@SRI-AI.ARPA Dissertation
Received: from SRI-AI by SU-AI with TCP/SMTP; 7 Nov 83 10:27:25 PST
Date: Mon 7 Nov 83 10:16:28-PST
From: Kurt Konolige <Konolige@SRI-AI.ARPA>
Subject: Dissertation
To: aic-staff@SRI-AI.ARPA, csli-friends@SRI-AI.ARPA, mclaughlin@SUMEX-AIM.ARPA
The defense of my thesis, ``A Deduction Model of Belief,'' is
scheduled for Tuesday Nov. 15 at 2:30pm in MJH252. Anyone who wants a
copy of the first draft before that time can pick one up in my office at
SRI (EJ272, just inside the door on the metal bookshelf); or send me an
address and I'll mail a copy.
--kk
-------
∂07-Nov-83 1030 EMMA@SRI-AI.ARPA recycling
Received: from SRI-AI by SU-AI with TCP/SMTP; 7 Nov 83 10:30:29 PST
Date: Mon 7 Nov 83 10:27:38-PST
From: EMMA@SRI-AI.ARPA
Subject: recycling
To: csli-folks@SRI-AI.ARPA
There is now a recycling barrel beside the coke machine in
Ventura. Please use it for paper recycling only, no glossy paper
and no cans.
-------
∂07-Nov-83 1033 @SU-SCORE.ARPA:EENGELMORE@SUMEX-AIM.ARPA Request from China
Received: from SU-SCORE by SU-AI with TCP/SMTP; 7 Nov 83 10:33:13 PST
Received: from SUMEX-AIM.ARPA by SU-SCORE.ARPA with TCP; Mon 7 Nov 83 10:31:25-PST
Date: Mon 7 Nov 83 10:25:57-PST
From: Ellie Engelmore <EENGELMORE@SUMEX-AIM.ARPA>
Subject: Request from China
To: faculty@SU-SCORE.ARPA
cc: EENGELMORE@SUMEX-AIM.ARPA
Dr. Feigenbaum has received a letter from Mr. Chen Liangkuana, a member
of the faculty of the East China Engineering Institute, People's
Republic of China. Mr. Liangkuana is seeking Visiting Scholar status
at the Stanford Department of Computer Science for two years beginning
at the end of 1983 or the beginning of next year. All his expenses
will be paid by the Chinese government.
He has taken part in the design of computers and is interested in
working on distributed processing and computing systems. If you are
interested in more information about him (resume and letters of
recommendation), please let me know.
-------
∂07-Nov-83 1507 LAWS@SRI-AI.ARPA AIList Digest V1 #92
Received: from SRI-AI by SU-AI with TCP/SMTP; 7 Nov 83 15:06:39 PST
Date: Sunday, November 6, 1983 11:06PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #92
To: AIList@SRI-AI
AIList Digest Monday, 7 Nov 1983 Volume 1 : Issue 92
Today's Topics:
Halting Problem,
Metaphysics,
Intelligence
----------------------------------------------------------------------
Date: 31 Oct 83 19:13:28-PST (Mon)
From: harpo!floyd!clyde!akgua!psuvax!simon @ Ucb-Vax
Subject: Re: Semi-Summary of Halting Problem Discussion
Article-I.D.: psuvax.335
About halting:
it is unclear what is meant precisely by "can a program of length n
decide whether programs of length <= n will halt". First, the input
to the smaller programs is not specified in the question. Assuming
that it is a unique input for each program, known a priori (for
example, the index of the program), then the answer is obviously YES
for the following restriction: the deciding program has size 2**n and
decides on smaller programs (there are a few constants that are
neglected too). There are fewer than 2*2**n programs of length <= n.
For each of them, represent halting on its specific input by 1 and
looping by 0. The resulting bit string is essentially the program
needed - it clearly exists. Getting hold of it is another matter - it
is also obvious that this cannot be done in a uniform manner for every
n because of the halting problem. At the cost of more sophisticated
coding, and tremendous expenditure of time, a similar construction can
be made to work for programs of length O(n).
If the input is not fixed, the question is obviously hopeless - there are
very small universal programs.
As a practical matter it is not the halting problem that is relevant, but its
subrecursive analogues.
janos simon
------------------------------
Date: 3 Nov 83 13:03:22-PST (Thu)
From: harpo!eagle!mhuxl!mhuxm!pyuxi!pyuxss!aaw @ Ucb-Vax
Subject: Re: Halting Problem Discussion
Article-I.D.: pyuxss.195
A point missing in this discussion is that the halting problem is
equivalent to the question:
Can a method be formulated to attempt to solve ANY problem
which can determine if it is not getting closer to the
solution
so the meta-halters (not the clothing) can't be more than disguised
time limits etc. for the general problem, since they CAN NOT MAKE
INFERENCES ABOUT THE PROCESS they are to halt
Aaron Werman pyuxi!pyuxss!aaw
------------------------------
Date: 9 Nov 83 21:05:28-EST (Wed)
From: pur-ee!uiucdcs!uokvax!andree @ Ucb-Vax
Subject: Re: re: awareness - (nf)
Article-I.D.: uiucdcs.3586
Robert -
If I understand correctly, your reasons for preferring dualism (or
physicalism) to functionalism are:
1) It seems more intuitively obvious.
2) You are worried about legal/ethical implications of functionalism.
I find that somewhat amusing, as those are EXACTLY my reasons for
preferring functionalism to either dualism or physicalism. The legal
implications of differentiating between groups by arbitrarily denying
`souls' to one are well known; it usually leads to slavery.
<mike
------------------------------
Date: Saturday, 5 November 1983, 03:03-EST
From: JCMA@MIT-AI
Subject: Inscrutable Intelligence
From: Dan Carnese <DJC%MIT-OZ@MIT-MC.ARPA>
Trying to find the ultimate definition for field-naming terms is a
wonderful, stimulating philosophical enterprise.
I think you missed the point altogether. The idea is that *OPERATIONAL
DEFINITIONS* are known to be useful and are found in all mature disciplines
(e.g., physics). The fact that AI doesn't have an operational definition of
intelligence simply points up the fact that the field of inquiry is not yet a
discipline. It is a proto-discipline precisely because key issues remain
vague and undefined and because there is no paradigm (in the Kuhnian sense of
the term, not popular vulgarizations).
That means that it is not possible to specify criteria for certification in
the field, not to mention the requisite curriculum for the field. This all
means that there is lots of work to be done before AI can enter the normal
science phase.
However, one can make an empirical argument that this activity has little
impact on technical progress.
Let's see your empirical argument. I haven't noticed any intelligent machines
running around the AI lab lately. I certainly haven't noticed any that can
carry on any sort of reasonable conversation. Have you? So, where is all
this technical progress regarding understanding intelligence?
Make sure you don't fall into the trap of thinking that intelligent machines
are here today (Douglas Hofstadter debunks this position in his "Artificial
Intelligence: Subcognition as Computation," CS Dept., Indiana U., Nov. 1982).
------------------------------
Date: 5 November 1983 15:38 EST
From: Steven A. Swernofsky <SASW @ MIT-MC>
Subject: Turing test in everyday life
Have you ever gotten one of those phone calls from people who are trying
to sell you a magazine subscription? Those people sound *awfully* like
computers! They have a canned speech, with canned places to wait for
human (customer) response, and they seem to have a canned answer to
anything you say. They are also *boring*!
I know the entity at the other end of the line is not a computer
(because they recognize my voice -- someone correct me if this is not a
good test) but we might ask: how good would a computer program have to
be to fool someone into thinking that it is human, in this limited case?
I suspect you wouldn't have to do much, since the customer doesn't
expect much from the salescreature who phones. Perhaps there is a
lesson here.
-- Steve
[There is a system, in use, that can recognize affirmative and negative
replies to its questions. It also stores a recording of your responses
and can play the recording back to you before ending the conversation.
The system is used for selling (e.g., record albums) and for dunning,
and is effective partly because it is perceived as "mechanical". People
listen to it because of the novelty, it can be programmed to make negative
responses very difficult, and the playback of your own replies is very
effective. -- KIL]
------------------------------
Date: 1 Nov 83 13:41:53-PST (Tue)
From: hplabs!hao!seismo!uwvax!reid @ Ucb-Vax
Subject: Slow Intelligence
Article-I.D.: uwvax.1129
When people's intelligence is evaluated, at least subjectively, it is common
to hear such things as "He is brilliant but never applies himself," or "She
is very intelligent, but can never seem to get anything accomplished due to
her short attention span." This seems to imply to me that intelligence is
sort of like voltage--it is potential. Another analogy might be a
weight-lifter, in the sense that no one doubts her
ability to do amazing physical things, based on her appearance, but she needn't
prove it on a regular basis.... I'm not at all sure that people's working
definition of intelligence has anything at all to do with either time or
survival.
Glenn Reid
..seismo!uwvax!reid (reid@uwisc.ARPA)
------------------------------
Date: 2 Nov 83 8:08:19-PST (Wed)
From: harpo!eagle!mhuxl!ulysses!unc!mcnc!ecsvax!unbent @ Ucb-Vax
Subject: intelligence and adaptability
Article-I.D.: ecsvax.1466
Just two quick remarks from a philosopher:
1. It ain't just what you do; it's how you do it.
Chameleons *adapt* to changing environments very quickly--in a way
that furthers their goal of eating lots of flies. But what they're doing
isn't manifesting *intelligence*.
2. There's adapting and adapting. I would have thought that
one of the best evidences of *our* intelligence is not our ability to
adapt to new environments, but rather our ability to adapt new
environments to *us*. We don't change when our environment changes.
We build little portable environments which suit *us* (houses,
spaceships), and take them along.
------------------------------
Date: 3 Nov 83 7:51:42-PST (Thu)
From: decvax!tektronix!ucbcad!notes @ Ucb-Vax
Subject: What about physical identity? - (nf)
Article-I.D.: ucbcad.645
It's surprising to me that people are still speaking in terms of
machine intelligence unconnected with a notion of a physical host that
must interact with the real world. This is treated as a trivial problem
at most (I think Ken Laws said that one could attach any kind of sensing
device, and hence (??) set any kind of goal for a machine). So why does
Hubert Dreyfus treat this problem as one whose solution is a *necessary*,
though not sufficient, condition for machine intelligence?
But is it a solved problem? I don't think so--nowhere near, from
what I can tell. Nor is it getting the attention it requires for solution.
How many robots have been built that can infer their own physical limits
and capabilities?
My favorite example is the oft-quoted SHRDLU conversation; the
following exchange has passed for years without comment:
-> Put the block on top of the pyramid
-> I can't.
-> Why not?
-> I don't know.
(That's not verbatim.) Note that in human babies, fear of falling seems to
be hardwired. A baby will still attempt, when old enough, to do things like
put a block on top of a pyramid--but it certainly doesn't seem to need an
explanation for why it should not bother after the first few tries. (And
at that age, it couldn't understand the explanation anyway!)
SHRDLU would have to be taken down, and given another "rule".
SHRDLU had no sense of what it is to fall down. It had an arm, and an
eye, but only a rather contrived "sense" of its own physical identity.
It is this sense that Dreyfus sees as necessary.
---
Michael Turner (ucbvax!ucbesvax.turner)
------------------------------
Date: 4 Nov 83 5:57:48-PST (Fri)
From: ihnp4!ihuxn!ruffwork @ Ucb-Vax
Subject: RE:intelligence and adaptability
Article-I.D.: ihuxn.400
I would tend to agree that it's not how a being adapts to its
environment, but how it changes the local environment to better
suit itself.
Also, I would have to say that adapting the environment
would only aid in ranking the intelligence of a being if that
action was a voluntary decision. There are many instances
of creatures that alter their surroundings (water spiders come
to mind), but could they decide not to ??? I doubt it.
...!iham1!ruffwork
------------------------------
Date: 4 Nov 83 15:36:33-PST (Fri)
From: harpo!eagle!hou5h!hou5a!hou5d!mat @ Ucb-Vax
Subject: Re: RE:intelligence and adaptability
Article-I.D.: hou5d.732
Man is the toolmaker and the principal tool-user of all the living things
that we know of. What does this mean?
Consider driving a car or skating. When I do this, I have managed to
incorporate an external system into my own control system with its myriad
of pathways both forward and backward.
This takes place at a level below that which usually is considered to
constitute intelligent thought. On the other hand, we can adopt external
things into our thought-model of the world in a way which no other creature
seems to be capable of.
Is there any causal relationship here?
Mark Terribile
DOdN
------------------------------
Date: 6 Nov 1983 20:54-PST
From: fc%usc-cse%USC-ECL@SRI-NIC
Subject: Re: AIList Digest V1 #90
Irwin Marin's course in AI started out by asking us to define
the term 'Natural Stupidity'. I guess artificial intelligence must be
anything both unnatural and unstupid. We had a few naturally stupid
examples to work with, so we got a definition quite quickly. Naturally
stupid types were unable to adapt, unable to find new representations,
and made of flesh and bone. Artificially intelligent types were
machines designed to adapt their responses and seek out more accurate
representations of their environment and themselves. Perhaps this would
be a good 'working' definition. At any rate, definitions are only
'working' if you work with them. If you can work with this one I
suggest you go to it and stop playing with definitions.
FC
------------------------------
End of AIList Digest
********************
∂07-Nov-83 1512 JF@SU-SCORE.ARPA meeting, november 21 at stanford
Received: from SU-SCORE by SU-AI with TCP/SMTP; 7 Nov 83 15:11:56 PST
Date: Mon 7 Nov 83 15:04:56-PST
From: Joan Feigenbaum <JF@SU-SCORE.ARPA>
Subject: meeting, november 21 at stanford
To: Bay-Area-Theorists: ;
The next meeting of the Bay Area Theory Seminar will be held on Monday
(note change from the usual Friday), November 21, 1983 at Stanford
University. The exact location and times are
CERAS LGI, Rm 112, 10:00 a.m. to 5:00 p.m.
I will send campus maps with CERAS circled to the BATS coordinator at your
location, and I will try to find out whatever I can about parking (but I'm
sure it won't be good news--try to car pool!). The speakers will be:
Andrey Goldberg, Berkeley
Abstract:
The knapsack problem is as follows: Given a bound B and a set of n objects
with values ci and sizes ai, find a subset of the highest value among subsets
of total size no greater than B. The relaxed knapsack problem is the same,
except we are allowed to "cut" (i.e., take fractional parts of) objects. We assume
that the points (ai, ci) are distributed in the unit square according to a
Poisson process with mean N. Also, we assume that B = beta * N, for some
constant beta.
We construct a polynomial time probabilistic approximation scheme, i.e., we will
show that for every epsilon in [0, 1] there is a polynomial time algorithm that
finds the exact solution to the problem with probability greater than or equal to
1 - epsilon (the algorithm we present is not a coin-flipping algorithm; its
success depends on the input).
We also prove that under the above probabilistic model, the expected difference
between solutions to relaxed and 0-1 problems is THETA [(log↑2 n)/n].
In addition we consider the number of places in which optimal relaxed and 0-1
solutions differ, and show the relation between this random variable and the
difference between values of relaxed and 0-1 problems.
Allen Goldberg, UCSC
Abstract:
Loop fusion is an important optimization technique applicable to
languages, such as database query languages, functional languages and
set-theoretic languages, that specify computations directly on
composite-valued objects. Loop fusion is applied when translating
composite operations to element-at-a-time operations. It results in
significant but linear time improvements and asymptotic space
improvements by combining loops that are made explicit by the
translation. A graph-theoretic formalization of the problem of
finding an optimal loop fusion schedule is stated and used to show
the problem NP-complete. NP-completeness results or efficient
algorithms are given for restricted versions of the problem.
Nick Pippenger, IBM
M. Ajtai, J. Komlos and E. Szemeredi have recently shown
that comparators can be assembled to create
sorting networks of depth O(log n) and therefore of size O(n log n).
The proof of this marvelous result is of such intricacy
that it does not now seem feasible to optimize the
processes involved and obtain reasonable values for
the constants implicit in the O(...) notation.
Our object in this talk is to consider the simpler
"selection networks", which merely classify n elements
into a larger half and a smaller half, each comprising n/2 elements.
Bounds of O(n log n) for the size, or of O(log n)
for the depth, of selection networks were not known
until they emerged as corollaries of the
Ajtai-Komlos-Szemeredi theorem mentioned above.
We shall show that applying the ideas of
Ajtai, Komlos and Szemeredi to the more modest goal of constructing
selection networks yields a situation in which
standard techniques for the analysis of algorithms
(recurrences, generating functions, etc.)
can be brought to bear and in which good estimates for the
constants can be derived.
Gabriel Kuper, Stanford
(abstract available soon)
There is a possibility that we will also have a speaker from PARC. Contact
your local coordinator if you have any questions and if he or she can't help
you, contact me. If you received two copies of this message (or if you are
reading someone else's copy and did not receive your own and would like to),
contact me directly. The local coordinators are:
klawe.ibm-sj@rand-relay
avi%ucbernie@berkeley
guibas.pa@parc
manfred.ucsc@rand-relay
jf@su-score
See you on November 21,
Joan Feigenbaum
(jf@su-score)
-------
∂07-Nov-83 1744 JF@SU-SCORE.ARPA mailing list
Received: from SU-SCORE by SU-AI with TCP/SMTP; 7 Nov 83 17:43:59 PST
Date: Mon 7 Nov 83 17:41:03-PST
From: Joan Feigenbaum <JF@SU-SCORE.ARPA>
Subject: mailing list
To: bats@SU-SCORE.ARPA
you can now send messages to everyone on this distribution list by mailing to
BATS@su-score
joan feigenbaum
-------
∂07-Nov-83 1831 ALMOG@SRI-AI.ARPA reminder on why context wont go away
Received: from SRI-AI by SU-AI with TCP/SMTP; 7 Nov 83 18:31:32 PST
Date: 7 Nov 1983 1827-PST
From: Almog at SRI-AI
Subject: reminder on why context wont go away
To: csli-friends at SRI-AI
cc: almog, peters, grosz
Tomorrow, Tuesday 11.8.83, we have our sixth meeting. The speaker
is Stanley Peters from CSLI and Stanford University. Next week
J.Hobbs from SRI will be giving a talk.
Attached is the abstract of S.Peters' talk (n.b. meetings are
in Ventura Hall, 3.15 pm)
LOGICAL FORM AND CONTEXT
S.PETERS, CSLI
Even linguists who have believed in the existence of a logical
form of language have come to recognize that certain aspects of
meaning are best dealt with in terms of what contexts sentences
can be used in.
A case study is the linguistic analysis of presupposition. In
the late '60s, linguists were analyzing presuppositions in
context-independent semantic terms -- e.g., using truth-value
gaps. Then they came to see that use-related features of the
phenomena they were dealing with called for a more pragmatic
treatment. Eventually, Karttunen proposed an analysis that dealt
with presupposition in nonsemantic, purely context-dependent
terms.
I will recount these developments, and try to give an indication
of what has been meant by "context" in such linguistic work, as
well as of some mechanisms linguists have employed to relate
sentences to appropriate contexts.
-------
-------
∂07-Nov-83 2011 LAWS@SRI-AI.ARPA AIList Digest V1 #93
Received: from SRI-AI by SU-AI with TCP/SMTP; 7 Nov 83 20:11:00 PST
Date: Monday, November 7, 1983 1:11PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #93
To: AIList@SRI-AI
AIList Digest Tuesday, 8 Nov 1983 Volume 1 : Issue 93
Today's Topics:
Implementations - Lisp for MV8000,
Expert Systems - Troubleshooting & Switching Systems,
Alert - IEEE Spectrum,
Fifth Generation - Stalking The Gigalip,
Intelligence - Theoretical Speed,
Humor - Freud Reference,
Metadiscussion - Wittgenstein Quote,
Seminars - Knowledge Representation & Logic Programming,
Conferences - AAAI-84 Call for Papers
----------------------------------------------------------------------
Date: Tue, 1 Nov 83 16:51:42 EST
From: Michael Fischer <Fischer@YALE.ARPA>
Subject: Lisp for MV8000
The University of New Haven is looking for any version of Lisp that
runs on a Data General MV8000, or for a portable Lisp written in Fortran
or Pascal that could be brought up in a short time.
Please reply to me by electronic mail and I will bring it to their
attention, or contact Alice Fischer directly at (203) 932-7069.
-- Michael Fischer <Fischer@YALE.ARPA>
------------------------------
Date: 5 Nov 83 21:31:57-EST (Sat)
From: decvax!microsoft!uw-beaver!tektronix!tekig1!sal @ Ucb-Vax
Subject: Expert systems for troubleshooting
Article-I.D.: tekig1.1442
I am in the process of evaluating the feasibility of developing expert
systems for troubleshooting instruments and functionally complete
circuit boards. If anyone has had any experience in this field or has
seen a similar system, please get in touch with me either through the
net or call me at 503-627-3678 during 8:00am - 6:00pm PST. Thanks.
Salahuddin Faruqui
Tektronix, Inc.
Beaverton, OR 97007.
------------------------------
Date: 4 Nov 83 17:20:42-PST (Fri)
From: ihnp4!ihuxl!pvp @ Ucb-Vax
Subject: Looking for a rules based expert system.
Article-I.D.: ihuxl.707
I am interested in obtaining a working version of a rule based
expert system, something on the order of RITA, ROSIE, or EMYCIN.
I am interested in the knowledge and inference control structure,
not an actual knowledge base. The application would be in the
area of switching system maintenance and operation.
I am in the 5ESS(tm) project, and so prefer a Unix based product,
but I would be willing to convert a different type if necessary.
An internal BTL product would be desirable, but if anyone knows
about a commercially available system, I would be interested in
evaluating it.
Thanks in advance for your help.
Philip Polli
BTL Naperville
IX 1F-474
(312) 979-0834
ihuxl!pvp
------------------------------
Date: Mon 7 Nov 83 09:50:29-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: IEEE Spectrum Alert
The November issue of IEEE Spectrum is devoted to the 5th Generation.
In addition to the main survey (which includes some very detailed tables
about sources of funding), there are:
A review of Feigenbaum and McCorduck's book, by Mark Stefik.
A glossary (p. 39) of about 25 AI and CS terms, taken from
Gevarter's Overview of AI and Robotics for NASA.
Announcement (p. 126) of The Artificial Intelligence Report, a
newsletter for people interested in AI but not engaged in research.
It will begin in January; no price is given. Contact Artificial
Intelligence Publications, 95 First St., Los Altos, CA 94022,
(415) 949-2324.
Announcement (p. 126) of a tour of Japan for those interested in
the 5th Generation effort.
Brief discussion (p. 126) of Art and Computers: The First Artificial-
Intelligence Coloring Book, a set of line drawings by an artist-taught
rule-based system.
An interesting parable (p. 12) for those who would educate the public
about AI or any other topic.
-- Ken Laws
------------------------------
Date: 5-Nov-83 10:41:44-CST (Sat)
From: Overbeek@ANL-MCS (Overbeek)
Subject: Stalking The Gigalip
[Reprinted from the Prolog Digest.]
E. W. Lusk and I recently wrote a short note concerning attempts
to produce high-speed Prolog machines. I apologize for perhaps
restating the obvious in the introduction. In any event we
solicit comments.
Stalking the Gigalip
Ewing Lusk
Ross A. Overbeek
Mathematics and Computer Science Division
Argonne National Laboratory
Argonne, Illinois 60439
1. Introduction
The Japanese have recently established the goal of producing a
machine capable of performing between 10 million and 1 billion
logical inferences per second (where a logical inference corresponds
to a Prolog procedure invocation). The motivating belief is that
logic programming unifies many significant areas of computer science,
and that expert systems based on logic programming will be the
dominant application of computers in the 1990s. A number of
countries have at least considered attempting to compete with the
Japanese in the race to attain a machine capable of such execution
rates. The United States funding agencies have definitely indicated
a strong desire to compete with the Japanese in the creation of such
a logic engine, as well as in the competition to produce
supercomputers that can deliver at least two orders of magnitude
improvement (measured in megaflops) over current machines. Our goal
in writing this short note is to offer some opinions on how to go
about creating a machine that could execute a gigalip. It is
certainly true that the entire goal of creating such a machine should
be subjected to severe criticism. Indeed, we feel that it is
probably the case that a majority of people in the AI research
community feel that it represents (at best) a misguided effort.
Rather than entering this debate, we shall concentrate solely on
discussing an approach to the goal. In our opinion a significant
component of many of the proposed responses by researchers in the
United States is based on the unstated assumption that the goal
itself is not worth pursuing, and that the benefits will accrue from
additional funding to areas in AI that only minimally impinge on the
stated objective.
[ This paper is available on {SU-SCORE} as:
PS:<Prolog>ANL-LPHunting.Txt
There is a limited supply of hard copies that
can be mailed to those with read-only access
to this newsletter -ed ]
------------------------------
Date: Monday, 7 November 1983 12:03:23 EST
From: Robert.Frederking@CMU-CS-CAD
Subject: Intelligence; theoretical speed
Not to stir this up again, but around here, some people like the
definition that intelligence is "knowledge brought to bear to solve
problems". This indicates that you need knowledge, ways of applying it, and
a concept of a "problem", which implies goals. One problem with measuring
human "IQ"s is that you almost always end up measuring (at least partly) how
much knowledge someone has, and what culture they're part of, as well as the
pure problem solving capabilities (if any such critter exists).
As for the theoretical speed of processing, the speed of light is a
theoretical limit on the propagation of information (!), not just matter, so
the minimum theoretical cycle time of a processor with a one-foot-long
information path (mighty small) is about a nanosecond, since light
covers roughly one foot per nanosecond (not too fast!). So the
question is, what is the theoretical limit on the physical size of a
processor? (Or, how do you build a transistor out of three atoms?)
------------------------------
Date: 4 Nov 83 7:01:30-PST (Fri)
From: harpo!eagle!mhuxl!mhuxm!pyuxi!pyuxss!aaw @ Ucb-Vax
Subject: Humor
Article-I.D.: pyuxss.196
[Semi-Summary of Halting Problem Disc]
must have been some kind of joke. Sigmund's book is a real layman's
thing, and in it he asserts that the joke
a: where are you going?
b: MINSKY
a: you said "minsky" so I'd think you are going to "pinsky". I
happen to know you are going to "minsky" so whats the use in lying?
is funny.
aaron werman pyuxi!pyuxss!aaw
------------------------------
Date: 05 Nov 83 1231 PST
From: Jussi Ketonen <JK@SU-AI>
Subject: Inscrutable Intelligence
On useless discussions - one more quote by Wittgenstein:
Wovon man nicht sprechen kann, darueber muss man schweigen.
(Whereof one cannot speak, thereof one must be silent.)
------------------------------
Date: 05 Nov 83 0910 PST
Date: Fri, 4 Nov 83 19:28 PST
From: Moshe Vardi <vardi@Diablo>
Subject: Knowledge Seminar
Due to the overwhelming response to my announcement and the need to
find a bigger room, the first meeting is postponed to Dec. 9,
10:00am.
Moshe Vardi
------------------------------
Date: Thu, 3 Nov 1983 22:50 EST
From: HEWITT%MIT-OZ@MIT-MC.ARPA
Subject: SEMINAR
[Forwarded by SASW@MIT-MC.]
Date: Thursday, November 10, 1983 3:30 P.M.
Place: NE43 8th floor Playroom
Title: "Some Fundamental Limitations of Logic Programming"
Speaker: Carl Hewitt
Logic Programming has been proposed by some as the universal
programming paradigm for the future. In this seminar I will discuss
some of the history of the ideas behind Logic Programming and assess
its current status. Since many of the problems with current Logic
Programming Languages such as Prolog will be solved, it is not fair to
base a critique of Logic Programming by focusing on the particular
limitations of languages like Prolog. Instead I will focus discussion
on limitations which are inherent in the enterprise of attempting to
use logic as a programming language.
------------------------------
Date: Thu 3 Nov 83 10:44:08-PST
From: Ron Brachman <Brachman at SRI-KL>
Subject: AAAI-84 Call for Papers
CALL FOR PAPERS
AAAI-84
The 1984 National Conference on Artificial Intelligence
Sponsored by the American Association for Artificial Intelligence
(in cooperation with the Association for Computing Machinery)
University of Texas, Austin, Texas
August 6-10, 1984
AAAI-84 is the fourth national conference sponsored by the American
Association for Artificial Intelligence. The purpose of the conference
is to promote scientific research of the highest caliber in Artificial
Intelligence (AI), by bringing together researchers in the field and by
providing a published record of the conference.
TOPICS OF INTEREST
Authors are invited to submit papers on substantial, original, and
previously unreported research in any aspect of AI, including the
following:
    AI and Education (including Intelligent CAI)
    AI Architectures and Languages
    Automated Reasoning (including automatic programming,
        automatic theorem-proving, commonsense reasoning, planning,
        problem-solving, qualitative reasoning, search)
    Cognitive Modelling
    Expert Systems
    Knowledge Representation
    Learning
    Methodology (including technology transfer)
    Natural Language (including generation, understanding)
    Perception (including speech, vision)
    Philosophical and Scientific Foundations
    Robotics
REQUIREMENTS FOR SUBMISSION
Timetable: Authors should submit five (5) complete copies of their
papers (hard copy only---we cannot accept on-line files) to the AAAI
office (address below) no later than April 2, 1984. Papers received
after this date will be returned unopened. Notification of acceptance
or rejection will be mailed to the first author (or designated
alternative) by May 4, 1984.
Title page: Each copy of the paper should have a title page (separate
from the body of the paper) containing the title of the paper, the
complete names and addresses of all authors, and one topic from the
above list (and subtopic, where applicable).
Paper body: The authors' names should not appear in the body of the
paper. The body of the paper must include the paper's title and an
abstract. This part of the paper must be no longer than thirteen (13)
pages, including figures but not including bibliography. Pages must be
no larger than 8-1/2" by 11", double-spaced (i.e., no more than
twenty-eight (28) lines per page), with text no smaller than standard
pica type (i.e., at least 12 pt. type). Any submission that does not
conform to these requirements will not be reviewed. The publishers will
allocate four pages in the conference proceedings for each accepted
paper, and will provide additional pages at a cost to the authors of
$100.00 per page over the four page limit.
Review criteria: Each paper will be stringently reviewed by experts in
the area specified as the topic of the paper. Acceptance will be based
on originality and significance of the reported research, as well as
quality of the presentation of the ideas. Proposals, surveys, system
descriptions, and incremental refinements to previously published work
are not appropriate for inclusion in the conference. Applications
clearly demonstrating the power of established techniques, as well as
thoughtful critiques and comparisons of previously published material
will be considered, provided that they point the way to new research in
the field and are substantive scientific contributions in their own
right.
Submit papers and general inquiries to:
    American Association for Artificial Intelligence
    445 Burgess Drive
    Menlo Park, CA 94025
    (415) 328-3123
    AAAI-Office@SUMEX

Submit program suggestions and inquiries to:
    Ronald J. Brachman
    AAAI-84 Program Chairman
    Fairchild Laboratory for Artificial Intelligence Research
    4001 Miranda Ave., MS 30-888
    Palo Alto, CA 94304
    Brachman@SRI-KL
------------------------------
End of AIList Digest
********************
∂07-Nov-83 2245 @SU-SCORE.ARPA:YM@SU-AI Student Committee Members - 83/84
Received: from SU-SCORE by SU-AI with TCP/SMTP; 7 Nov 83 22:45:23 PST
Received: from SU-AI.ARPA by SU-SCORE.ARPA with TCP; Mon 7 Nov 83 22:38:44-PST
Date: 07 Nov 83 2238 PST
From: Yoni Malachi <YM@SU-AI>
Subject: Student Committee Members - 83/84
To: faculty@SU-SCORE, students@SU-SCORE, staff@SU-SCORE
(name@host) denotes preferred electronic MAILing address.
(faculty-person) next to the committee name denotes a chairperson
Admissions: (Brian Reid)
83/84-- Yoram Moses (YOM@SAIL)
Marianne Winslett (WINSLETT@SCORE)
Appointments and Promotions: (as appropriate)
83/84-- (to be elected depending on subfield [only for appointments])
Art/Decoration:
83/84--Carol Twombly (CXT@SAIL)
Bicycle:
83/84--
Colloquium (cookies & juice):
83/84--Ginger Edighoffer (HARKNESS@SU-SCORE),
Jay Gischer (GISCHER@NAVAJO)
Colloquium AV and advertising:
83/84-- (AV unneeded when held in Terman Aud, Sharon does advertising)
Comprehensive:
Winter 83/84-- (Don Knuth)
Software Systems: Per Bothner, (BOTHNER@SCORE)
Tracy Larrabee, (TRACY@SCORE)
Hardware Systems: Stefan Demetrescu, (STEFAN@SCORE)
Alg. and Data Struct.: Oren Patashnik, (PATASHNIK@SCORE)
MTC: Yoram Moses, (YOM@SAIL)
NA: Billy Wilson, (WILSON@SCORE)
AI: Jitendra Malik, (JMM@SAIL)
Spring 83/84-- (Rob Schreiber)
Communications (physical bulletin boards):
83/84--Arun Swami (ARUN@SCORE)
Computer Facilities:
83/84--Peter Karp (KARP@SUMEX), Jeff Mogul (MOGUL@SCORE)
Computer Forum: (Miller, Tajnai)
82/83--Richard Treitel (TREITEL@SUMEX)
83/84--Peter Rathmann (PKR@SAIL)
Computer Science/Artificial Intelligence: (Bruce Buchanan)
83/84--Haym Hirsh (HAYM@SCORE)
Computer Usage Policy: (Jeff Ullman)
83/84--Andrei Broder (BRODER@SCORE), Victoria Pigman (PIGMAN@SUMEX)
Course Evaluation:
83/84-- none. Will be done by ASSU
Curriculum: (Bob Floyd)
83/84--Kenneth Brooks (BROOKS@DIABLO)
Faculty Search Committee: (as appropriate)
83/84-- (to be elected depending on subfield)
Fellowships and Awards: (Tajnai)
83/84--Joan Feigenbaum (JF@SCORE),
Graduate Student Council:
83/84--
Industrial Lectureship: (McCarthy)
83/84--Chad Mitchell (M.CHAD@SIERRA)
Intramurals (sports):
83/84--Ben Grosof (GROSOF@SCORE)
Library/Publication:
83/84--Paul Asente (ASENTE@SHASTA)
LOTS Liaison:
83/84--Arthur Keller (ARK)
Masters Admissions and Policy: (Joe Oliger)
83/84 - Rob Nagler (NAGLER@SCORE), Chuck Engle (ENGLE@SCORE)
Odd Jobs:
83/84-- to be handled by the bureaucrats
Orientation:
82/83--Frank Yellin (FY), Joan Feigenbaum (JF@SCORE),
Cary Gray (CGG@SAIL), Peter Karp (KARP@SUMEX),
Jean-Luc Brouillet(BROUILLET@SCORE)
83/84-- (to be elected later in the year)
Photo:
83/84--Jean-Luc Brouillet(BROUILLET@SCORE)
Prancing Pony:
83/84--Marty Frost (ME), Arthur Keller (ARK), Allan Miller (AAM),
Joe Weening (JJW)
Refrigerator:
83/84--
Social:
83/84--Marvin Theimer (THEIMER@SCORE), Peter Karp (KARP@SUMEX)
Space: (Joe Oliger, Marlie Yearwood)
83/84--Richard Treitel (TREITEL@SUMEX), Chuck Engle (ENGLE@SCORE)
Student Bureaucrats:
Winter83/Fall83--Yoni Malachi (YM)
Fall83/Winter84--Oren Patashnik (PATASHNIK@SCORE)
Winter84/Spring84--Eric Berglund (BERGLUND@DIABLO)
Spring84/Fall84--
TGIF:
83/84--Joe Weening (JJW),
Allen Van Gelder (AVG@DIABLO),
Harry Mairson (MAI)
∂08-Nov-83 0927 LB@SRI-AI.ARPA MEETING 11/10 - CSLI Building Options
Received: from SRI-AI by SU-AI with TCP/SMTP; 8 Nov 83 09:27:43 PST
Date: Tue 8 Nov 83 09:29:17-PST
From: LB@SRI-AI.ARPA
Subject: MEETING 11/10 - CSLI Building Options
To: CSLI-PRINCIPALS@SRI-AI.ARPA
cc: lb@SRI-AI.ARPA
There will be a meeting this Thursday from 1:15 p.m. to
2:00 p.m. in the Conference Room to discuss options for the
CSLI building.
Stanley Peters
-------
∂08-Nov-83 1101 ULLMAN@SU-SCORE.ARPA computer policy
Received: from SU-SCORE by SU-AI with TCP/SMTP; 8 Nov 83 11:01:44 PST
Date: Tue 8 Nov 83 10:56:38-PST
From: Jeffrey D. Ullman <ULLMAN@SU-SCORE.ARPA>
Subject: computer policy
To: faculty@SU-SCORE.ARPA
cc: broder@SU-SCORE.ARPA, pigman@SUMEX-AIM.ARPA
We now have a committee to come up with a revised policy on computer
accounts, consisting of Keith Lantz, Andrei Broder, Victoria Pigman,
and myself. As I was not present at the meeting where the original
draft was discussed, and the minutes refer only to discontent with
the way the document deals with coursework on research machines,
can someone fill me in on the problem?
-------
∂08-Nov-83 1908 @SRI-AI.ARPA:desRivieres.PA@PARC-MAXC.ARPA CSLI Activities for Thursday Nov. 10th
Received: from SRI-AI by SU-AI with TCP/SMTP; 8 Nov 83 19:07:57 PST
Received: from PARC-MAXC.ARPA by SRI-AI.ARPA with TCP; Tue 8 Nov 83 19:06:12-PST
Date: Tue, 8 Nov 83 19:03 PST
From: desRivieres.PA@PARC-MAXC.ARPA
Subject: CSLI Activities for Thursday Nov. 10th
To: csli-friends@SRI-AI.ARPA
Reply-to: desRivieres.PA@PARC-MAXC.ARPA
CSLI SCHEDULE FOR THURSDAY, NOVEMBER 10, 1983
10:00 Research Seminar on Natural Language
Speaker: Ron Kaplan (CSLI-Xerox)
Title: "Linguistic and Computational Theory"
Place: Redwood Hall, room G-19
12:00 TINLunch
Discussion leader: Martin Kay (CSLI-Xerox)
Paper for discussion: "Processing of Sentences with
Intra-sentential Code-switching"
by A.K. Joshi,
COLING 82, pp. 145-150.
Place: Ventura Hall
2:00 Research Seminar on Computer Languages
Speaker: Glynn Winskel (CMU)
Title: "The Semantics of Communicating Processes"
Place: Redwood Hall, room G-19
3:30 Tea
Place: Ventura Hall
4:15 Colloquium
Speaker: Michael Beeson (San Jose State University)
Title: "Computational Aspects of Intuitionistic Logic"
Place: Redwood Hall, room G-19
Note to visitors:
Redwood Hall is close to Ventura Hall on the Stanford Campus. It
can be reached from Campus Drive or Panama Street. From Campus Drive
follow the sign for Jordan Quad. $0.75 all-day parking is
available in a lot located just off Campus Drive, across from the
construction site.
∂09-Nov-83 0228 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #50
Received: from SU-SCORE by SU-AI with TCP/SMTP; 9 Nov 83 02:28:11 PST
Date: Tuesday, November 8, 1983 11:52PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #50
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Wednesday, 9 Nov 1983 Volume 1 : Issue 50
Today's Topics:
Implementations - User Convenience Vs. Elegance & Performance & I/O
----------------------------------------------------------------------
Date: Mon 7 Nov 83 15:48:01-MST
From: Uday Reddy <U-Reddy@UTAH-20>
Subject: More on Referential Transparency of =..
Re: Pereira, Referential Transparency, PROLOG Digest V1 #49 (11.7.83)
Fernando Pereira states that =.. is referentially transparent because
the function symbols in Prolog are uninterpreted. This argument is
valid provided =.. is used only with terms constructed by function
symbols, not predicate symbols. If p is a predicate symbol, then
p(A) =.. X
is not referentially transparent.
Since this condition is not adhered to in Prolog, Pereira's argument
merely transfers the blemish from =.. to call.
Provided =.. is used in a referentially transparent way, is it
first-order or second-order ? The answer is naturally that it
is second-order, because one of its arguments contains a function.
Incidentally, I don't see why "is" is referentially opaque.
One can extend the semantics to include integers, interpreting
"is" as a relation mapping Herbrand terms to integers.
-- Uday Reddy
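[For readers who have not met these built-ins, a brief top-level
illustration of the behaviour under discussion; the answers are shown
in comments, and the exact output format varies between Prolog systems:]
    ?- X is 1+2.        % X = 3   -- "is" interprets the tree +(1,2)
    ?- 1+2 =.. L.       % L = [+, 1, 2]
    ?- p(a) =.. L.      % L = [p, a] -- univ also takes apart a term
                        %   built with a predicate symbol, which is
                        %   the case Reddy objects to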
------------------------------
Date: Saturday, 5-Nov-83 03:10:17-GMT
From: O'Keefe HPS (on ERCC DEC-10) <OKeefe.R.A. at EDXA>
Subject: Speed of Waterloo Prolog
Someone recently said in this Digest that Waterloo Prolog was
an interpreter that was as fast as DEC-10 Prolog. Now I have heard
nothing but good about Waterloo Prolog, and have recommended it to
a couple of people, so don't take this as any sort of attack on it.
The thing is, you've got to take the machine into consideration
as well. Here are some figures to show what's happening:
Dec-10 compiled on KI-10/Bottoms-10 20k LIPS
C-Prolog v1.4.EdAI on VAX 750/UNIX 1k LIPS
C-Prolog v1.4.EdAI on VAX 780/VMS 1.8k LIPS (estimate)
C-Prolog (??version??) IBM 3081/(??) 11k LIPS
If Waterloo Prolog is absolutely as fast as DEC-10 Prolog,
then it is relatively about twice as fast as C-Prolog. Given
that C-Prolog is written in C, and has not been tuned for IBMs,
and that Waterloo Prolog is said to be written in assembler,
this is a good but not earth-shaking result. A compiler of the
same quality might be expected to yield 200k LIPS.
What we need is more systems that are at least twice as
fast as DEC-10 Prolog. People without 3081s are still waiting
for one that is half as fast. (If yours is that good or better,
PLEASE tell us.)
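[LIPS figures of this kind are conventionally obtained by timing the
naive-reverse benchmark, which performs 496 logical inferences when
reversing a 30-element list; whether that is the benchmark behind the
numbers quoted above is an assumption. For reference, the standard
formulation is:]
    nrev([], []).
    nrev([H|T], R) :-
        nrev(T, RT),
        append(RT, [H], R).

    append([], L, L).
    append([H|X], L, [H|Y]) :-
        append(X, L, Y).

    % LIPS = 496 / (seconds taken to reverse a 30-element list).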
------------------------------
Date: 7-Nov-83 11:53:10-CST (Mon)
From: Gabriel@ANL-MCS (John Gabriel)
Subject: I/O
A very brief comment on the remark in Digest #48 that not only must
useful programs do I/O, but also many useful programs are more than
simply functions. First of all "Hear Hear" from my English heritage,
or "Right On" from my American experience.
The distinction being made is the distinction between a function,
simply mapping points from one space to another, and a functional
(see, for example, T. Levi-Civita, The Absolute Differential Calculus,
Blackie and Son, London & Glasgow, 1926, 1929, 1946, 1947, 1950), which
is an object computed by passage along a path between two points
in a space. A functional differs from a function because its value
depends not only on the end points of the path, but on the path,
i.e., the history.
Many of the interesting objects of physics (i.e., the real
world) are functionals, not functions, simply because they depend
on the paths, i.e., the histories. Here is an example: suppose we
are dealing with a collection of NAND logic gates wired together
without feedback paths, i.e., the graph of information flow is
Directed and Acyclic. This system may be modelled in Prolog by
a single generic definition of a NAND gate, predicates defining
the Dataflow graph, and a little recursion.
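[A minimal sketch of the kind of model being described; the gate
names, the node-naming scheme, and the value/3 predicate are
illustrative inventions, not taken from the message:]
    % Generic NAND gate: nand(In1, In2, Out).
    nand(0, 0, 1).
    nand(0, 1, 1).
    nand(1, 0, 1).
    nand(1, 1, 0).

    % Dataflow graph of one particular acyclic circuit:
    % gate(Name, Input1, Input2, Output).
    gate(g1, a,  b, n1).
    gate(g2, n1, c, out).

    % Value of a named node, given bindings for the primary inputs.
    value(Node, Inputs, V) :-
        member(Node = V, Inputs).
    value(Node, Inputs, V) :-
        gate(_, I1, I2, Node),
        value(I1, Inputs, V1),
        value(I2, Inputs, V2),
        nand(V1, V2, V).

    member(X, [X|_]).
    member(X, [_|T]) :- member(X, T).

    % ?- value(out, [a = 1, b = 1, c = 1], V).     V = 1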
But the moment several J-K flip-flops are placed in the
circuit, a generic definition of a J-K flip-flop is not sufficient:
information must be carried about the state of each instance of a
J-K flip-flop, a clock must be added to the system, and the state
at time T must be computed from the state at time T-1.
Now, I think this can be done in "pure" Prolog without
assert and retract, by recursion on T, but even relatively small
T values such as T=100 seem likely to lead to stack overflows
and other such embarrassments, simply because the recursion
keeps ALL the history, not just the immediately previous state.
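[A sketch of the recursion on T the author has in mind; initial_state/1
and next_state/2 are hypothetical names standing for the circuit
description. A query such as state(100, S) rebuilds the whole chain of
earlier states, which is the source of the stack growth complained of:]
    % state(T, S): S is the circuit state at clock tick T.
    state(0, S) :-
        initial_state(S).
    state(T, S) :-
        T > 0,
        T1 is T - 1,
        state(T1, S1),          % carries the entire history on the stack
        next_state(S1, S).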
So for my purposes, assert and retract are important
capabilities, not because they cannot be replaced by recursion,
but because the recursive replacement is harder to think about,
gives me less flexibility, and exposes me to shortages of
computing resources in doing practical problems of interest.
I take some risks in deleting old unit clauses for T
values believed to be no longer of interest, because if I make
a mistake and delete something too early, I will reach incorrect
conclusions.
Perhaps I gain relative to recursion even if I keep all
state history as unit clauses, because I get to pop all stacks
fairly often, and so the problems of the global stack filling up
with "holes" go away. That's a question about implementation.
Other comments seem close to my concerns. I would like
to partition the clause space somehow, to give my knowledge base
perspective (W.A. Woods IEEE Computer Society Oct or Nov 1983 -
the issue about Knowledge Bases), so that the things of immediate
interest are close to the program's attention span, and all else
is temporarily forgotten. Something like Bill Woods' comments
elsewhere about being able to push the current environment onto
a stack, delve deeper into a specialised knowledge base, develop
a unit clause from that detail, pop the stacked environment and
add to it the new unit clause.
All of the questions about distributed databases arise
in this context - consistency, synchronisation, etc. etc. But
on the other hand a solution to some of the problems of dataflow
in Prolog by a "perspective" mechanism might generate programming
styles having extensive high level parallelism and suitable for
use on wide parallel architectures. Incidentally, even this insight
is not really my own; there is a BBN internal report by Bill
Woods, some five years old, exploring these issues.
------------------------------
Date: Saturday, 5-Nov-83 17:26:24-GMT
From: O'Keefe HPS (on ERCC DEC-10) <OKeefe.R.A. at EDXA>
Subject: Univ Continued, Partitioned Data Bases
I am obliged to agree with much of what Uday Reddy says. But
definitely not all. I should perhaps apologise for being a little
bit lax about what I meant by "first-order". The fact that call,
univ, arg, functor, ancestors, subgoal←of can all be transformed
away so easily means that they have first-order POWER. ancestors
and subgoal←of are a bit remote from logic, and I am quite happy
to concede that they are not any order, and I'd be perfectly
content if I never saw them again. But the case of functor,
arg, call, and univ is very different. The "transformation"
required is
Just Fill In The Definitions
That is, any *logic program* using functor, arg, call, name
and univ can be *completed* by adding clauses to define those
five predicates, NO change being made to any of the original
clauses. The meaning of functor..univ is thus in some sense
dependent on the program, but any fixed program can be completed
to a proper logic program.
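[To make "Just Fill In The Definitions" concrete: for a hypothetical
fixed program whose only symbols are foo/2, g/1, and the constant a,
the completing clauses would look like the following. In an actual
Prolog these predicates are built in and cannot be redefined; the
clauses are meant as part of the completed *logic program*, not as
code to load:]
    call(foo(X, Y)) :- foo(X, Y).
    call(g(X))      :- g(X).

    foo(X, Y) =.. [foo, X, Y].
    g(X)      =.. [g, X].
    a         =.. [a].

    functor(foo(_, _), foo, 2).
    functor(g(_),      g,   1).
    functor(a,         a,   0).

    arg(1, foo(X, _), X).
    arg(2, foo(_, Y), Y).
    arg(1, g(X),      X).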
The *REAL* trouble-maker is 'var'. Anything Reddy cares to
say about the illogicality of 'var' has my full support. We could
imagine having predicates %integer, %atom which recognised integers
and atoms, they would then be first-order the same way that functor
is. The current definitions aren't quite that, though, they are
integer(X) :- nonvar(X), %integer(X).
atom(X) :- nonvar(X), %atom(X).
The horrible thing about var is that it can succeed with a given
argument and a short while later fail with exactly the same
argument. Oddly enough, var doesn't have to be a primitive. If
we have cuts,
    var(X) :-
        nonvar(X), !, fail.
    var(X).

    nonvar(X) :-            % vars unify with 27
        X \= 27, !.         % so does 27, but it
    nonvar(X) :-            % doesn't unify with
        X \= 42.            % 42 (vars do again).

    X \= X :- !,
        fail.
    X \= Y.
It is because of this that I was careful to say that functor, arg,
univ, and call are first-order *in logic programs*. What they
may be in Prolog I have no idea.
functor, arg, and univ cause the type checker TypeCh.Pl no
end of trouble. Basically the problem is that knowing the type
of T in
arg(N, T, A)
doesn't help you find the type of A, and the elements of the list
yielded by univ have different types. But then there are other
perfectly good logic programs by anyone's standards that defeat
the type checker.
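[For instance, with a made-up term date(1983, nov): knowing that the
second argument of arg/3 is a date/2 term says nothing about the type
of the extracted argument, and the list built by univ mixes types:]
    ?- arg(1, date(1983, nov), A).      % A = 1983             (an integer)
    ?- arg(2, date(1983, nov), A).      % A = nov              (an atom)
    ?- date(1983, nov) =.. L.           % L = [date, 1983, nov]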
The heart of the disagreement between Reddy and me is in the
following paragraph from Reddy's message:
Prolog has several features that violate referential
transparency. Some of them are var, functor, arg, univ
and call. To see why, consider the simple example
1 + 2 =.. [+, 1, 2]
Since 1+2 denotes a semantic object (the integer 3) its
syntactic structure should be transparent to the program.
But using =.. allows the program to look at its syntactic
structure. 2+1 denotes the same semantic object as 1+2. But
replacing 1+2 by 2+1 in the above literal does not
preserve its truthfulness.
Wrong. 1+2 does NOT denote 3. It denotes the tree +(1,2) . In a
piece of logic, terms do not denote anything but themselves until
you specify an interpretation, and then the interpretation relation
lies OUTSIDE the program. I am perfectly free to set up a model in
which 1+2 denotes "for all X not both elephant(X) and forgets(X)"
[there is no reason why a model cannot be another logic]. The only
semantic objects (in a computational sense) in Prolog are TREEs,
and if some predicates happen to interpret some trees as if they
were arithmetic expressions, so what ? Certainly there is a Prolog
relation which states "1+2 =:= 3", but I am at liberty to define
another such relation according to which "1+2 =::= 1". I repeat,
in Prolog and logic programming generally, 1+2 just (computationally)
denotes +(1,2) and any other meaning we may want to give it is our
problem.
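[To make the point concrete, here is one way such a relation could be
defined; the operator declaration and the clause are illustrative,
not taken from the original message:]
    :- op(700, xfx, =::=).

    _ + _ =::= 1.               % interpret every sum as denoting 1

    % ?- 1+2 =::= 1.            succeeds
    % ?- 2+1 =::= 1.            succeeds as well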
The thing is, logic programs (as currently defined) do not use
paramodulation or demodulation. Michael Fay's paper in the 4th CADE
looks as though it might yield a clean way of building in equational
theories (he shows how to handle complete sets of rewrites in the
unification algorithm). In a logic programming language where say
arithmetic was built into the unification algorithm, then indeed it
would be true that "1 + 2" and "3" would be the same thing. But in
such a system "3" would match "1+X+Y" as well, at least 3 ways.
Wayne Christopher seems to have heard of Prolog/KR. If you
can specify which clause pools are to be searched, and into which
clause pool(s) a new clause is to be inserted, there is no need
to create or destroy clause pools: you think of them as always
existing but as being empty until you put something into them (no
need to create) & you leave it to the garbage collector to page
out pools you haven't used in a while (no need to destroy).
You don't actually get any extra logical power, as you can
add the name of the data base in which clause lives as an extra
argument of its head, and so
call←in(db1 v db2 v db3, foo(X, Y))
becomes
( foo(db1, X, Y) ; foo(db2, X, Y) ; foo(db3, X, Y) )
I must admit that I don't see how this gives us "more control
over evaluation." I'm not saying it doesn't, just that I don't
understand. Could you go into a bit more detail, Wayne Christopher ?
Actually, I'm not at all sure that I *want* more control over
evaluation. Prolog has now been taught in several Universities
for periods ranging from one to five years (that I know of) and
has been taught to schoolchildren for about three. All the
Prolog teachers I have talked with agree that
the LOGIC part of Prolog is easy to teach
the CONTROL part of Prolog is hard to teach.
Remember also that CONNIVER was eventually abandoned...
I have used the DEC-10 Prolog "recorded" data base a fair
bit. That is in fact why I want a replacement ! I can testify
that a partitioned data base where you have to figure out for
yourself what holes to put things down is no substitute for a
good indexing scheme. If you can request an index on any or all
argument positions of a predicate (IC-Prolog does this), and
especially if you could specify a multi-level index on some
argument positions, you can simulate any partitioning system you
like. Efficiently. If you want to handle "very large data
bases", I would suggest John Lloyd's dynamic hashing. It has
been used in at least two Prolog interpreters already. His
reports are available from Melbourne University.
------------------------------
End of PROLOG Digest
********************
∂09-Nov-83 1532 GOLUB@SU-SCORE.ARPA Lunch on Tuesday, Nov 15
Received: from SU-SCORE by SU-AI with TCP/SMTP; 9 Nov 83 15:32:47 PST
Date: Wed 9 Nov 83 15:30:54-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Lunch on Tuesday, Nov 15
To: faculty@SU-SCORE.ARPA
I'm pleased to say that Gordon Bower, our associate dean, will join
us for lunch on Tuesday, Nov 15. GENE
-------
∂09-Nov-83 1618 @SU-SCORE.ARPA:YM@SU-AI Some points for thoughts and discussion for the Town Meeting:
Received: from SU-SCORE by SU-AI with TCP/SMTP; 9 Nov 83 16:18:03 PST
Received: from SU-AI.ARPA by SU-SCORE.ARPA with TCP; Wed 9 Nov 83 16:13:50-PST
Date: 09 Nov 83 1613 PST
From: Yoni Malachi <YM@SU-AI>
Subject: Some points for thoughts and discussion for the Town Meeting:
To: students@SU-SCORE, faculty@SU-SCORE
Reply-To: bureaucrat@score
**Students moving with HPP to Welch Road.
**A terminal on every desk.
**More public terminal rooms.
**Student/faculty input on space-related decisions.
**Use of the small room next to the lounge.
**Improving the basement terminal area.
**Trying to integrate MS and PhD students better.
**Solving the long-term space problem.
**Supporting students from a common pool rather than through individual
projects.
**Using CSD machines for coursework.
∂09-Nov-83 2344 LAWS@SRI-AI.ARPA AIList Digest V1 #95
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Nov 83 23:44:13 PST
Date: Wednesday, November 9, 1983 5:08PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #95
To: AIList@SRI-AI
AIList Digest Thursday, 10 Nov 1983 Volume 1 : Issue 95
Today's Topics:
Alert - Hacker's Dictionary,
Conference - Robotic Intelligence and Productivity,
Tutorial - Machine Translation,
Report - AISNE meeting
----------------------------------------------------------------------
Date: 8 Nov 1983 1215:19-EST
From: Lawrence Osterman <OSTERMAN@CMU-CS-C.ARPA>
Subject: Guy Steele's
[Reprinted from the CMU-C bboard.]
New book is now out.
The Hacker's Dictionary, Available in the CMU Bookstore
right now. The cost is $5.95 ($6.31 after taxes) and it's well
worth getting (It includes (among other things) The COMPLETE
INTERCAL character set (ask anyone in 15-312 last fall),
Trash 80,N, Moby, and many others (El Camino Bignum?))
Larry
[According to another message, the CMU bookstore immediately
sold out. -- KIL]
------------------------------
Date: 7 Nov 1983 1127-PST
From: MEDIONI@USC-ECLC
Subject: Conference announcement
****** CONFERENCE ANNOUNCEMENT ******
ROBOTIC INTELLIGENCE AND PRODUCTIVITY CONFERENCE
WAYNE STATE UNIVERSITY, DETROIT, MICHIGAN
NOVEMBER 18-19, 1983
For more information and advance program, please contact:
Dr Pepe Siy
(313) 577-3841
(313) 577-3920 - Messages
or Dr Singh
(313) 577-3840
------------------------------
Date: Tue 8 Nov 83 10:06:34-CST
From: Jonathan Slocum <LRC.Slocum@UTEXAS-20.ARPA>
Subject: Tutorial Announcement
[The following is copied from a circular, with the author's encouragement.
Square brackets delimit my personal insertions, for clarification. -- JS]
THE INSTITUT DALLE MOLLE POUR LES ETUDES SEMANTIQUES ET
COGNITIVES DE L'UNIVERSITE DE GENEVE ("ISSCO") is to hold
a Tutorial on
MACHINE TRANSLATION
from Monday 2nd April to Friday 6th, 1984, in Lugano, Switzerland
The attraction of Machine Translation as an application domain for
computers has long been recognized, but pioneers in the field seriously
underestimated the complexity of the problem. As a result, early
systems were severely limited.
The design of more recent systems takes into account the
interdisciplinary nature of the task, recognizing that MT involves the
construction of a complete system for the collection, representation,
and strategic deployment of a specialised kind of linguistic knowledge.
This demands contributions from the fields of both theoretical and
computational linguistics, computer science, and expert system design.
The aim of this tutorial is to convey the state of the art by allowing
experts in different aspects of MT to present their particular points of
view. Sessions covering the historical development of MT and its
possible future evolution will also be included to provide a tutorial
which should be relevant to all concerned with the relationship between
natural language and computer science.
The Tutorial will take place in the Palazzo dei Congressi or the Villa
Heleneum, both set in parkland on the shore of Lake Lugano, which is
perhaps the most attractive among the lakes of the Swiss/Italian Alps.
Situated to the south of the Alpine massif, Lugano enjoys an early, warm spring.
Participants will be accommodated in nearby hotels. Registration will
take place on the Sunday evening preceding the Tutorial.
COSTS: Fees for registration submitted by January 31, 1984, will be 120
Swiss francs for students, 220 Swiss francs for academic participants,
and 320 Swiss francs for others. After this date the fees will increase
by 50 Swiss francs for all participants. The fees cover tuition,
handouts, coffee, etc. Hotel accommodation varies between 30 and 150
Swiss francs per night [booking form available, see below]. It may be
possible to arrange cheaper [private] accommodation for students.
FOR FURTHER INFORMATION [incl. booking forms, etc.] (in advance of the
Tutorial) please contact ISSCO, 54 route des Acacias, CH-1227 Geneva; or
telephone [41 for Switzerland] (22 for Geneva) 20-93-33 (University of
Geneva), extension ("interne") 21-16 ("vingt-et-un-seize"). The
University switchboard is closed daily from 12 to 1:30 Swiss time.
[Switzerland is six (6) hours ahead of EST, thus 9 hours ahead of PST.]
------------------------------
Date: Tue 8 Nov 83 10:59:12-CST
From: Jonathan Slocum <LRC.Slocum@UTEXAS-20.ARPA>
Subject: Tutorial Program
PROVISIONAL PROGRAMME
Each session is scheduled to include a 50-minute lecture followed by a
20-minute discussion period. Most evenings are left free, but rooms
will be made available for informal discussion, poster sessions, etc.
Sun. 1st 5 p.m. to 9 p.m. Registration
Mon. 2nd 9:30 Introductory session M. King [ISSCO]
11:20 A non-conformist's view of the G. Sampson [Lancaster]
state of the art
2:30 Pre-history of Machine Translation B. Buchmann [ISSCO]
4:20 SYSTRAN P. Wheeler [Commission
of the European
Communities]
Tue. 3rd 9:30 An overview of post-65 developments E. Ananiadou [ISSCO]
S. Warwick [ISSCO]
11:20 Software for MT I: background J.L. Couchard [ISSCO]
D. Petitpierre [ISSCO]
2:30 SUSY D. MAAS [Saarbruecken]
4:20 TAUM Meteo and TAUM Aviation P. Isabelle [Montreal]
Wed. 4th 9:30 Linguistic representations in A. De Roeck [Essex]
syntax based MT systems
11:00 AI approaches to MT P. Shann [ISSCO]
12:00 New developments in Linguistics E. Wehrli [UCLA]
and possible implications for MT
3:00 Optional excursion
Thu. 5th 9:30 GETA C. Boitet [Grenoble]
11:20 ROSETTA J. Landsbergen [Philips]
2:30 Software for MT II: R. Johnson [Manchester]
some recent developments M. Rosner [ISSCO]
4:20 Creating an environment for A. Melby [Brigham Young]
the translator
Fri. 6th 9:30 METAL J. Slocum [Texas]
11:20 EUROTRA M. King [ISSCO]
2:30 New projects in France C. Boitet [Grenoble]
4:20 MT - the future A. Zampolli [Pisa]
5:30 Closing session
There will be a 1/2 hour coffee break between sessions. The lunch break
is from 12:30 to 2:30.
------------------------------
Date: Mon, 7 Nov 83 14:01 EST
From: Visions <kitchen%umass-cs@CSNet-Relay>
Subject: Report on AISNE meeting (long message)
BRIEF REPORT ON
FIFTH ANNUAL CONFERENCE OF THE
AI SOCIETY OF NEW ENGLAND
Held at Brown University, Providence, Rhode Island, 4th-5th November 1983.
Programme Chairman: Drew McDermott (Yale)
Local Arrangements Chairman: Eugene Charniak (Brown)
Friday, 4th November
8:00PM
Long talk by Harry Pople (Pittsburgh), "Where is the expertise in
expert systems?" Comments and insights about the general state of
work in expert systems. INTERNIST: history, structure, and example.
9:30PM
"Intense intellectual colloquy and tippling" [Quoted from programme]
LATE
Faculty and students at Brown very hospitably billeted us visitors
in their homes.
Saturday, 5th November
10:00AM
Panel discussion, Ruven Brooks (ITT), Harry Pople (Pittsburgh), Ramesh
Patil (MIT), Paul Cohen (UMass), "Feasible and infeasible expert-systems
applications". [Unabashedly selective and incoherent notes:] RB: Expert
systems have to be relevant, and appropriate, and feasible. There are
by-products of building expert systems, for example, the encouragement of
the formalization of the problem domain. HP: Historically, considering
DENDRAL and MOLGEN, say, users have ultimately made greater use of the
tools and infrastructure set up by the designers than of the top-level
capabilities of the expert system itself. The necessity of taking into
account the needs of the users. RP: What is an expert system? Is
MACSYMA no more than a 1000-key pocket calculator? Comparison of expert
systems against real experts. Expert systems that actually work --
narrow domains in which hypotheses can easily be verified. What if the
job of identifying the applicability of an expert system is a harder
problem than the one the expert system itself solves? In the domains of
medical diagnosis: enormous space of diagnoses, especially if multiple
disorders are considered. Needed: reasoning about: 3D space, anatomy;
time; multiple disorders, causality; demography; physiology; processes.
HP: A strategic issue in research: small-scale, tractable problems that
don't scale up. Is there an analogue of Blocksworld? PC: Infeasible
(but not too infeasible) problems are fit material for research; feasible
problems for development. The importance of theoretical issues in choosing
an application area for research. An animated, general discussion followed.
11:30AM
Short talks:
Richard Brown (Mitre), Automatic programming. Use of knowledge about
programming and knowledge about the specific application domain.
Ken Wasserman (Columbia), "Representing complex physical objects". For
use in a system that digests patent abstracts. Uses frame-like
representation, giving parts, subparts, and the relationships between them.
Paul Barth (Schlumberger-Doll), Automatic programming for drilling-log
interpretation, based on a taxonomy of knowledge sources, activities, and
corresponding transformation and selection operations.
Malcolm Cook (UMass), Narrative summarization. Goal orientations of the
characters and the interactions between them. "Affect state map".
Extract recognizable patterns of interaction called "plot units". Summary
based on how these plot units are linked together. From this summary
structure a natural-language summary of the original can be generated.
12:30PM
Lunch, during which Brown's teaching lab, equipped with 55 Apollos,
was demonstrated.
2:00PM
Panel discussion, Drew McDermott (Yale), Randy Ellis (UMass), Tomas
Lozano-Perez (MIT), Mallory Selfridge (UConn), "AI and Robotics".
DMcD contemplated the effect that the realization of a walking, talking,
perceiving robot would have on AI. He remarked how current robotics
work does entail a lot of AI, but that there is necessary,
robotics-specific ground-work (like matrices, a code-word for "much mathematics").
All the other panelists had a similar view of this inter-relation between
robotics and AI. The other panelists then sketched robotics work being
done at their respective institutions. RE: Integration of vision and
touch, using a reasonable world model, some simple planning, and feedback
during the process. Cartesian robot, gripper, Ken Overton's tactile array
sensor (force images), controllable camera, Salisbury hand. Need for AI
in robotics, especially object representation and search. Learning -- a
big future issue for a robot that actually moves about in the world.
Problems of implementing algorithms in real time. For getting started in
robotics: kinematics, materials science, control theory, AI techniques,
but how much of each depends on what you want to do in robotics. TL-P:
A comparatively lengthy talk on "Automatic synthesis of fine motion
strategies", best exemplified by the problem of putting a peg into a hole.
Given the inherent uncertainty in all positions and motions, the best
strategy (which we probably all do intuitively) is to aim the peg just to
one side of the hole, sliding it across into the hole when it hits,
grazing the far side of the hole as it goes down. A method for generating
such a strategy automatically, using a formalism based on configuration
spaces, generalized dampers, and friction cones. MS: Plans for commanding
a robot in natural language, and for describing things to it, and for
teaching it how to do things by showing it examples (from which the robot
builds an abstract description, usable in other situations). A small, but
adequate robotics facility. Afterwards, an open discussion, during which
was stressed how important it is that the various far-flung branches of AI
be more aware of each other, and not become insular. Regarding robotics
research, all panelists agreed strongly that it was absolutely necessary
to work with real robot hardware; software simulations could not hope to
capture all the pernickety richness of the world, motion, forces, friction,
slippage, uncertainty, materials, bending, spatial location, at least not
in any computationally practical way. No substitute for reality!
3:30PM
More short talks
Jim Hendler (Brown), an overview of things going on at Brown, and in the
works. Natural language (story comprehension). FRAIL (frame-based
knowledge representation). NASL (problem solving). An electronic
repair manual, which generates instructions for repairs as needed from
an internal model, hooked up with a graphics and 3D modelling system.
And in the works: expert systems, probabilistic reasoning, logic programming,
problem solving, parallel computation (in particular marker-passing and
BOLTZMANN-style machines). Brown is looking for a new AI faculty member.
[Not a job ad, just a report of one!]
David Miller (Yale), "Uncertain planning through uncertain territory".
How to get from A to B if your controls and sensors are unreliable.
Find a path to your goal, along the path select checkpoints (landmarks),
adjust the path to go within eye-shot of the checkpoints, then off you go,
running demons to watch out for checkpoints and raise alarms if they don't
appear when expected. This means you're lost. Then you generate hypotheses
about where you are now (using your map), and what might have gone wrong to
get you there (based on a self-model). Verify one (some? all?) of these
hypotheses by looking around. Patch your plan to get back to an appro-
priate checkpoint. Verify the whole process by getting back on the beaten
track. Apparently there's a real Hero robot that cruises about a room
doing this.
Bud Crawley (GTE) described what was going on at GTE Labs in AI.
Knowledge-based systems. Natural-language front-end for data bases.
Distributed intelligence. Machine learning.
Bill Taylor (Gould Inc.), gave an idea of what applied AI research means
to his company, which (in his division) makes digital controllers for
running machines out on the factory floor. Currently, an expert system
for repairing these controllers in the field. [I'm not sure how far along
in being realized this was, I think very little.] For the future, a big,
smart system that would assist a human operator in managing the hundreds
of such controllers out on the floor of a decent sized factory.
Graeme Hirst (Brown, soon Toronto), "Artificial Digestion". Artificial
Intelligence attempts to model a very poorly understood system, the human
cognitive system. Much more immediate and substantial results could be
obtained by modelling a much better understood system, the human digestive
system. Examples of the behavior of a working prototype system on simulated
food input, drawn from a number of illustrative food-domains, including
a four-star French restaurant and a garbage pail. Applications of AD:
automatic restaurant reviewing, automatic test-marketing of new food
products, and vicarious eating for the diet-conscious and orally impaired.
[Forget about expert systems; this is the hot new area for the 80's!]
4:30PM
AISNE Business Meeting (Yes, some of us stayed till the end!)
Next year's meeting will be held at Boston University. The position of
programme chairman is still open.
A Final Remark:
All the above is based on my own notes of the conference. At the very
least it reflects my own interests and pre-occupations. Considering
the disorganized state of my notes, and the late hour I'm typing this,
a lot of the above may be just wrong. My apologies to anyone I've
misrepresented; by all means correct me. I hope the general interest of
this report to the AI community outweighs all these failings. LJK
===========================================================================
------------------------------
End of AIList Digest
********************
∂10-Nov-83 0116 DKANERVA@SRI-AI.ARPA Newsletter No. 8, November 10, 1983
Received: from SRI-AI by SU-AI with TCP/SMTP; 10 Nov 83 01:11:54 PST
Date: Wed 9 Nov 83 18:18:34-PST
From: DKANERVA@SRI-AI.ARPA
Subject: Newsletter No. 8, November 10, 1983
To: csli-friends@SRI-AI.ARPA
CSLI Newsletter
November 10, 1983 * * * Number 8
CSLI ADVISORY PANEL VISIT
November 17-19
The first meeting of the CSLI Advisory Panel will be held at
Ventura Hall this coming Thursday through Saturday, November 17-19.
The members of the Advisory Panel--Rod Burstall, Jerry Fodor,
George Miller, Nils Nilsson, Barbara Partee, and Bob Ritchie--will
first participate in the usual Thursday activities at the Center, and
CSLI people will have a chance to talk with them there. Then they
will meet Friday morning with the Executive Committee and that
afternoon with the various SL projects. Finally, on Saturday morning,
Jon Barwise and Betsy Macken will meet with the panel members to
discuss the impressions and ideas that came out of their visit.
* * * * * * *
CORRECTION
Glynn Winskel was mistakenly added to the Advisory Panel in last
week's Newsletter through my misreading of a message. I apologize for
any confusion this may have caused. The six members of the Advisory
Panel are as given in the first paragraph above.
- Dianne Kanerva
* * * * * * *
CSLI POSTDOCTORAL FELLOWSHIPS
CSLI is currently accepting applications for postdoctoral
fellowships for a period of one to two years, commencing September 1,
1984. Postdoctoral fellows will participate in an integrated program
of basic research on situated language--language as used by agents
situated in the world to exchange, store, and process information,
including both natural and computer languages.
The deadline for applications is February 15, 1984. Further
information can be obtained from Betsy Macken, Assistant Director,
CSLI, Ventura Hall, Stanford, CA 94305.
* * * * * * *
PROJECT A1 - PHONOLOGY, MORPHOLOGY, AND SYNTAX
Project A1 held its first substantive meeting last week. Ron
Kaplan and Martin Kay presented their work on finite-state transducers
as computational models of phonological rule systems. This led to
discussion of a number of formal and conceptual issues, such as
whether there are linguistically interesting rules that cannot be
modeled in this way and whether rules or their corresponding
transducers are the proper domain for formulating various kinds of
linguistic generalizations.
These discussions will be continued at the next meeting,
Wednesday, November 16, 3:30 at PARC, at which Paul Kiparsky will
present additional phenomena that might prove difficult for transducer
modeling.
* * * * * * *
PROJECT A2 - SYNTAX, PHONOLOGY, AND DISCOURSE STRUCTURE
Fairchild sponsored a talk last Monday, November 7, by Dennis
Klatt of the Massachusetts Institute of Technology. Klatt revealed
his methodology for deriving word-level and phrase-level duration
rules for a text-to-speech system. Breakthroughs in this area await
the encoding of knowledge about language into computational systems.
It was observed that more sophisticated syntactic analysis should be
incorporated, and the difficulty of capturing semantically governed
temporal effects was briefly mentioned.
Susan Stucky's presentation on Makua on November 9 was concerned
with the syntax of focus, more specifically, an analysis of the
syntactic encoding of focus (in GPSG) in the grammar of Makua, a Bantu
language. The analysis is illustrative in several ways. Of
particular interest to people in A2 are the questions regarding which
kinds of syntactic categories (e.g., terminal categories, nonterminal
categories, grammatical functions) are appropriate for stating order
constraints in the grammars of natural languages. This phenomenon (if
not the analysis itself) is also illustrative of the way in which
discourse functions can be syntactically encoded, and so can serve as
a convenient starting point for discussion of these issues.
* * * * * * *
CSLI SCHEDULE FOR *THIS* THURSDAY, NOVEMBER 10, 1983
10:00 Research Seminar on Natural Language
Speaker: Ron Kaplan (CSLI-Xerox)
Title: "Linguistic and Computational Theory"
Place: Redwood Hall, room G-19
12:00 TINLunch
Discussion leader: Martin Kay (CSLI-Xerox)
Paper for discussion: "Processing of Sentences with
Intra-sentential Code-switching"
by A.K. Joshi,
COLING 82, pp. 145-150.
Place: Ventura Hall
2:00 Research Seminar on Computer Languages
Speaker: Glynn Winskel (CMU)
Title: "The Semantics of Communicating Processes"
Place: Redwood Hall, room G-19
3:30 Tea
Place: Ventura Hall
4:15 Colloquium
Speaker: Michael Beeson (San Jose State University)
Title: "Computational Aspects of Intuitionistic Logic"
Place: Redwood Hall, room G-19
Note to visitors:
Redwood Hall is close to Ventura Hall on the Stanford Campus. It
can be reached from Campus Drive or Panama Street. From Campus Drive
follow the sign for Jordan Quad. $0.75 all-day parking is available
in a lot located just off Campus Drive, across from the construction
site.
* * * * * * *
CSLI SCHEDULE FOR *NEXT* THURSDAY, NOVEMBER 17, 1983
10:00 Research Seminar on Natural Language
Speaker: Stan Rosenschein (CSLI-SRI)
Title: "Issues in the Design of Artificial Agents
That Use Language"
Place: Redwood Hall, room G-19
12:00 TINLunch
Discussion leader: Jerry Hobbs
Paper for discussion: "The Second Naive Physics Manifesto"
by Patrick J. Hayes.
Place: Ventura Hall
2:00 Research Seminar on Computer Languages
Speaker: Mark Stickel (SRI)
Title: "A Nonclausal Connection-Graph
Resolution Theorem-Proving Program"
Place: Redwood Hall, room G-19
3:30 Tea
Place: Ventura Hall
4:15 Colloquium
Speaker: Charles Fillmore, Paul Kay, and
Mary Catherine O'Connor (UC Berkeley)
Title: "Idiomaticity and Regularity:
The Case of `Let Alone'"
Place: Redwood Hall, room G-19
Note to visitors:
Redwood Hall is close to Ventura Hall on the Stanford Campus. It
can be reached from Campus Drive or Panama Street. From Campus Drive
follow the sign for Jordan Quad. $0.75 all-day parking is available
in a lot located just off Campus Drive, across from the construction
site.
* * * * * * *
TINLUNCH SCHEDULE
TINLunch is held each Thursday at Ventura Hall on the Stanford
University campus as a part of CSLI activities. Copies of TINLunch
papers are at SRI in EJ251 and at Stanford University in Ventura Hall.
NEXT WEEK: THE SECOND NAIVE PHYSICS MANIFESTO
Patrick J. Hayes
November 10 Martin Kay
November 17 Jerry Hobbs
November 24 THANKSGIVING
December 1 Paul Martin
* * * * * * *
NEXT WEEK'S COMPUTER LANGUAGES SEMINAR
November 17, 2:00 p.m., Ventura Hall
"A Nonclausal Connection-Graph Resolution Theorem-Proving Program"
Mark Stickel, SRI
A theorem-proving program, combining the use of nonclausal
resolution and connection graphs, is described. The use of nonclausal
resolution as the inference system eliminates some of the redundancy
and unreadability of clause-based systems. The use of a connection
graph restricts the search space and facilitates graph search for
efficient deduction. Theory resolution will also be discussed.
Theory resolution constitutes a set of complete procedures for
building nonequational theories into a resolution theorem-proving
program so that axioms of the theory need never be resolved upon.
* * * * * * *
WHY CONTEXT WON'T GO AWAY
On Tuesday, November 8, Stanley Peters from CSLI spoke at the
sixth meeting. Next week, J. Hobbs from SRI will speak (Nov. 15, 3:15
p.m., in Ventura Hall). Given below is the abstract of Peters' talk.
LOGICAL FORM AND CONTEXT
Even linguists who have believed in the existence of a logical
form of language have come to recognize that certain aspects of
meaning are best dealt with in terms of what contexts sentences can be
used in. A case study is the linguistic analysis of presupposition.
In the late 1960s, linguists were analyzing presuppositions in
context-independent semantic terms--e.g., using truth-value gaps.
Then they came to see that use-related features of the phenomena they
were dealing with called for a more pragmatic treatment. Eventually,
Karttunen proposed an analysis that dealt with presupposition in
nonsemantic, purely context-dependent terms. I will recount these
developments, and try to give an indication of what has been meant by
"context" in such linguistic work, as well as of some mechanisms
linguists have employed to relate sentences to appropriate contexts.
* * * * * * *
KONOLIGE Ph.D. ORALS
Kurt Konolige will defend his thesis, ``A Deduction Model of
Belief,'' on Tuesday, November 15, at 2:30 p.m. in the first-floor
conference room in Margaret Jacks Hall (the room is still tentative).
The abstract of his dissertation is given below.
A DEDUCTION MODEL OF BELIEF
Reasoning about knowledge and belief of computer and human agents
is assuming increasing importance in Artificial Intelligence systems
in the areas of natural language understanding, planning, and
knowledge representation in general. Current formal models of belief
that form the basis for most of these systems are derivatives of a
possible-world semantics for belief. However, this model suffers from
epistemological and heuristic inadequacies. Epistemologically, it
assumes that agents know all the consequences of their belief. This
assumption is clearly inaccurate, because it doesn't take into account
resource limitations on an agent's reasoning ability. For example, if
an agent knows the rules of chess, it then follows in the
possible-world model that he knows whether white has a winning
strategy or not. On the heuristic side, proposed mechanical deduction
procedures have been first-order axiomatizations of the possible-world
semantics, an indirect and inefficient method of reasoning about
belief.
A more natural model of belief is a deduction model: An agent has
a set of initial beliefs about the world in some internal language,
and a deduction process for deriving some (but not necessarily all)
logical consequences of these beliefs. Within this model, it is
possible to account for resource limitations of an agent's deduction
process; for example, one can model a situation in which an agent
knows the rules of chess but does not have the computational resources
to search the complete game tree before making a move.
This thesis is an investigation of a Gentzen-type formalization
of the deductive model of belief. Several important original results
are proved. Among these are soundness and completeness theorems for a
deductive belief logic, a correspondence result that shows the
possible-worlds model is a special case of the deduction model, and a
modal analog to Herbrand's Theorem for the belief logic. Several
other topics of knowledge and belief are explored in the thesis from
the viewpoint of the deduction model, including a theory of
introspection about self-beliefs, and a theory of circumscriptive
ignorance, in which facts an agent doesn't know are formalized by
limiting or circumscribing the information available to him.
* * * * * * *
TALKWARE SEMINAR - CS 377
On Wednesday, November 9, John McCarthy (Stanford CS) spoke on "A
Common Business Communication Language." The problem is to construct
a standard language for computers belonging to different businesses to
exchange business communications. For example, a program for
preparing bids for made-to-order personal computer systems might do a
parts explosion and then communicate with the sales programs of parts
suppliers. A typical message might inquire about the price and
delivery of 10,000 of a certain integrated circuit. Answers to such
inquiries and orders and confirmations should be expressible in the
same language. In a military version, a headquarters program might
inquire how many airplanes of a certain kind were in operating
condition. It might seem that constructing such a language is merely
a grubby problem in standardization suitable for a committee of
businessmen. However, it turns out that the problem actually involves
formalizing a substantial fragment of natural language. What is
wanted is the semantics of natural language, not the syntax. The
lecture covered the CBCL problem, examples of what should be
expressible, ideas for doing it, and connections of the problem to the
semantics of natural language, mathematical logic and non-monotonic
reasoning.
Date: November 16
Speaker: Mike Genesereth (Stanford CS)
Topic: SUBTLE
Time: 2:15 - 4
Place: 380Y (Math corner)
No meeting November 23
Date: November 30
Speaker: Amy Lansky (Stanford / SRI)
Topic: GEM: A Methodology for Specifying Concurrent Systems
Time: 2:15 - 4
Place: 380Y (Math corner)
Date: December 7
Speaker: Donald Knuth (Stanford CS)
Topic: On the Design of Programming Languages
Time: 2:15 - 4
Place: 380Y (Math corner)
Date: December 14
Speaker: Everyone
Topic: Summary and discussion
Time: 2:15 - 4
Place: 380Y (Math corner)
Abstract: We will discuss the talks given during the quarter, seeing
what kind of picture of talkware emerges from them. We will also talk
about possibilities for next quarter. The interest (by both speakers
and the audience) so far indicates that we should continue it, either
in the same format or with changes.
* * * * * * *
SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
On Wednesday, November 9, Jose Meseguer of SRI spoke on "The
Computability of Abstract Data Types." Abstract Data Types (ADTs) are
initial models in equationally defined classes of algebras; they are
widely used in current programming languages and programming
methodologies. The talk discussed ADTs, some basic facts about
computable algebras, and recent characterization theorems for
computable ADTs.
NEXT WEEK'S SPEAKER: Yoram Moses, Stanford
TITLE: A Formal Treatment Of Ignorance
TIME: Wednesday, November 16, 4:15-5:30 PM
PLACE: Stanford Mathematics Dept. Faculty Lounge (383-N)
ABSTRACT: We augment well-known decidable variants of (Propositional)
Modal Logics of Knowledge with a validity operator V, where Vp means
that p is valid. The next step involves an attempt to define what we
may claim to know, based on a fixed knowledge base S. An interesting
case is that we can conclude ¬Kp whenever we fail to prove Kp. In
particular, we argue for assuming the axiom ¬Kp ⊃ K¬Kp whenever Kp ⊃
KKp is assumed. Should time allow, we shall remark on the Unexpected
Examination (Hangman's) Paradox, presenting a new approach to
describing how to avoid the real world's "refuting logic". This talk
will consist of a presentation of research in progress.
COMING EVENTS: November 23 Craig Smorynski
November 30 J. E. Fenstad
* * * * * * *
COMPUTER SCIENCE COLLOQUIUM NOTICE - WEEK OF NOV. 7-11
11/09/1983 Talkware Seminar
Wednesday John McCarthy
2:15-4:00 Stanford U. CS Dept.
380Y (Math Corner) A Common Business Communication Language
11/10/1983 AFLB
Thursday Prof. Alan L. Selman
12:30 Iowa State University
MJH352 From Complexity to Cryptography and Back
11/10/1983 Supercomputer Seminar
Thursday Steve Lundstrum
4:15
11/11/1983 Database Research Seminar
Friday Peter Rathmann
3:15 - 4:30 Stanford
MJH 352 Database Storage on Optical Disks
* * * * * * *
SYMPOSIUM ON CONDITIONALS AND COGNITIVE PROCESSES
Department of Linguistics
December 8-11, 1983
CERAS, Room 112
For information and preregistration, write: Conditionals Symposium,
Department of Linguistics, Stanford University, Stanford, CA 94305.
THURSDAY, DECEMBER 8:
2-5 Panel topic: Preliminary Definitions and Distinctions
B. Comrie (USC).
Conditionals: A Typology.
Discussant: J. H. Greenberg (Stanford U.)
P. Johnson-Laird (MRC, Cambridge, England).
Models of conditionals.
Discussant: R. Stalnaker (Cornell U.)
FRIDAY, DECEMBER 9:
9-12 Panel topic: Change in the System of Conditionals.
M. Harris (U. Salford, England).
The Historical Development of Conditional Sentences
in Romance.
Discussant: J. Hawkins (USC)
J. Reilly (UCLA).
The Acquisition of Temporals and Conditionals.
Discussant: A. ter Meulen (U. Groningen)
Panel discussant: C. A. Ferguson (Stanford U.)
2-5 Panel topic: Conditionals, Quantifiers, and Concessives.
F. Veltman (U. Amsterdam).
Data Semantics and the Pragmatics of Indicative
Conditionals.
Discussant: S. Fillenbaum (U. North Carolina)
J. Haiman (U. Manitoba).
Constraints on the Form and Meaning of the Protasis.
Discussant: E. Konig (U. Hanover)
Panel discussant: S. Peters (Stanford U.)
SATURDAY, DECEMBER 10:
9-12 Panel topic: Discourse Functions of Conditionals.
J. van der Auwera (U. Antwerp).
Speech Acts and Conditionals.
Discussant: O. Dahl (U. Stockholm)
B. Lavandera (CONICET, Argentina).
The Textual Function of Conditionals in Spanish.
Discussant: E. C. Traugott (Stanford U.)
Panel discussant: T. Givon (U. Oregon, Eugene)
2-5 Panel topic: Conditionals, Tense, and Modality.
R. Thomason (U. Pittsburgh).
Tense, Mood, and Conditionals.
Discussant: J. Barwise (Stanford U.)
N. Akatsuka (UCLA).
Conditionals Are Contextbound.
Discussant: S. Thompson (UCLA)
Panel discussant: H. Kamp (Bedford College, London)
SUNDAY, DECEMBER 11:
9-12 Panel topic: Syntactic Correlates of Conditionals.
T. Reinhart (Tel Aviv U.).
A Surface Structure Analysis of the "Donkey" Problem.
Discussant: E. Adams (UC Berkeley)
M. Bowerman (Max Planck Institute, Nijmegen).
Conditionals and Syntactic Development in Language
Acquisition.
Discussant: T. Bever (Columbia U.)
Panel discussant: N. Vincent (Cambridge U., England)
* * * * * * *
KNOWLEDGE SEMINAR AT IBM, SAN JOSE
We are planning to start at IBM, San Jose, a research seminar on
theoretical aspects of reasoning about knowledge, such as reasoning
with incomplete information, reasoning in the presence of
inconsistencies, and reasoning about changes of belief. The first few
meetings are intended to be introductory lectures on various attempts
at formalizing the problem, such as modal logic, nonmonotonic logic,
and relevance logic. There is a lack of good research in this area,
and the hope is that after a few introductory lectures, the format of
the meetings will shift into a more research-oriented style. The
first meeting is scheduled for Friday, December 9, at 10:00 a.m., with
future meetings also to be held on Fridays, but this may change if
there are a lot of conflicts. The first meeting will be partly
organizational in nature, but there will also be a talk by Joe Halpern
on "Applying modal logic to reason about knowledge and likelihood."
For further details contact:
Joe Halpern (halpern.ibm-sj@rand-relay, (408) 256-4701)
Yoram Moses (yom@su-hnv, (415) 497-1517)
Moshe Vardi (vardi@su-hnv, (408) 256-4936)
If you want to be on the mailing list, contact Moshe Vardi.
* * * * * * *
AAAI-84 CALL FOR PAPERS
The 1984 National Conference on Artificial Intelligence
Sponsored by the American Association for Artificial Intelligence
(in cooperation with the Association for Computing Machinery)
University of Texas, Austin, Texas
August 6-10, 1984
AAAI-84 is the fourth national conference sponsored by the American
Association for Artificial Intelligence. The purpose of the
conference is to promote scientific research of the highest caliber in
Artificial Intelligence (AI), by bringing together researchers in the
field and by providing a published record of the conference. Authors
are invited to submit papers on substantial, original, and previously
unreported research in any aspect of AI, including the following:
    AI and Education (including Intelligent CAI)
    AI Architectures and Languages
    Automated Reasoning (including automatic programming, automatic
        theorem-proving, commonsense reasoning, planning, problem-solving,
        qualitative reasoning, search)
    Cognitive Modelling
    Expert Systems
    Knowledge Representation
    Learning
    Methodology (including technology transfer)
    Natural Language (including generation, understanding)
    Perception (including speech, vision)
    Philosophical and Scientific Foundations
    Robotics
REQUIREMENTS FOR SUBMISSION
Timetable: Authors should submit five (5) complete copies of their
papers (hard copy only---we cannot accept on-line files) to the AAAI
office (address below) no later than April 2, 1984. Papers received
after this date will be returned unopened. Notification of acceptance
or rejection will be mailed to the first author (or designated
alternative) by May 4, 1984.
Title page: Each copy of the paper should have a title page (separate
from the body of the paper) containing the title of the paper, the
complete names and addresses of all authors, and one topic from the
above list (and subtopic, where applicable).
Paper body: The authors' names should not appear in the body of the
paper. The body of the paper must include the paper's title and an
abstract. This part of the paper must be no longer than thirteen (13)
pages, including figures but not including bibliography. Pages must
be no larger than 8-1/2" by 11", double-spaced (i.e., no more than
twenty-eight (28) lines per page), with text no smaller than standard
pica type (i.e., at least 12 pt. type). Any submission that does not
conform to these requirements will not be reviewed. The publishers
will allocate four pages in the conference proceedings for each
accepted paper, and will provide additional pages at a cost to the
authors of $100.00 per page over the four page limit.
Review criteria: Each paper will be stringently reviewed by experts in
the area specified as the topic of the paper. Acceptance will be
based on originality and significance of the reported research, as
well as quality of the presentation of the ideas. Proposals, surveys,
system descriptions, and incremental refinements to previously
published work are not appropriate for inclusion in the conference.
Applications clearly demonstrating the power of established
techniques, as well as thoughtful critiques and comparisons of
previously published material will be considered, provided that they
point the way to new research in the field and are substantive
scientific contributions in their own right.
Submit papers and general inquiries to:
     American Association for Artificial Intelligence
     445 Burgess Drive
     Menlo Park, CA 94025
     (415) 328-3123
     AAAI-Office@SUMEX
Submit program suggestions and inquiries to:
     Ronald J. Brachman, AAAI-84 Program Chairman
     Fairchild Laboratory for Artificial Intelligence Research
     4001 Miranda Ave., MS 30-888
     Palo Alto, CA 94304
     Brachman@SRI-KL
* * * * * * *
-------
∂10-Nov-83 0230 LAWS@SRI-AI.ARPA AIList Digest V1 #94
Received: from SRI-AI by SU-AI with TCP/SMTP; 10 Nov 83 02:30:14 PST
Date: Wednesday, November 9, 1983 1:34PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #94
To: AIList@SRI-AI
AIList Digest Wednesday, 9 Nov 1983 Volume 1 : Issue 94
Today's Topics:
Metaphysics - Functionalism vs Dualism,
Ethics - Implications of Consciousness,
Alert - Turing Biography,
Theory - Parallel vs. Sequential & Ultimate Speed,
Intelligence - Operational Definitions
----------------------------------------------------------------------
Date: Mon 7 Nov 83 18:30:07-PST
From: WYLAND@SRI-KL.ARPA
Subject: Functionalism vs Dualism in consciousness
The argument of functionalism versus dualism is
unresolvable because the models are based on different,
complementary paradigms:
* The functionalism model is based on the reductionist
approach, the approach of modern science, which explains
phenomena by logically relating them to controlled,
repeatable, publicly verifiable experiments. The
explanations about falling bodies and chemical reactions are
in this category.
* The dualism model is based on the miraculous approach,
which explains phenomena as singular events, which are by
definition not controlled, not repeatable, not verifiable,
and not public - i.e., the events are observed by a specific
individual or group. The existence of UFOs, parapsychology,
and the existence of externalized consciousness (i.e., soul) are
in this category.
These two paradigms are the basis of the argument of
Science versus Religion, and are not resolvable EITHER WAY. The
reductionist model, based on the philosophy of Parmenides and
others, assumes a constant, unchanging universe which we discover
through observation. Such a universe is, by definition,
repeatable and totally predictable: the concept that we could
know the total future if we knew the position and velocity of all
particles derives from this. The success of Science at
predicting the future is used as an argument for this paradigm.
The miraculous model assumes the reality of change, as
put forth by Heraclitus and others. It allows reality to be
changed by outside forces, which may or may not be knowable
and/or predictable. Changes caused by outside forces are, by
definition, singular events not caused by the normal chains of
causality. Our personal consciousness and (by extension,
perhaps) the existence of life in the universe are singular
events (as far as we know), and the basic axioms of any
reductionist model of the universe are, by definition,
unexplainable because they must come from outside the system.
The argument of functionalism versus dualism is not
resolvable in a final sense, but there are some working rules we
can use after considering both paradigms. Any definition of
intelligence, consciousness (as opposed to Consciousness), etc.
has to be based on the reductionist model: it is the only way we
can explain things in such a manner that we can predict results
and prove theories. On the other hand, the concept that all
sources of consciousness are mechanical is a religious position: a
categorical assumption about reality. It was not that long ago
that science said that stones do not fall from the sky; all
it would take to make UFOs accepted as fact would be for one to
land and set up shop as a merchant dealing in rugs and spices
from Aldebaran and Vega.
------------------------------
Date: Tuesday, 8 November 1983 14:24:55 EST
From: Robert.Frederking@CMU-CS-CAD
Subject: Ethics and Definitions of Consciousness
Actually, I believe you'll find that slavery has existed both with
and without believing that the slave had a soul. In many ancient societies
slaves were of identically the same stock as yourself; they had just run
into serious economic difficulties. As I recall, slavery of the blacks in
the U.S. wasn't justified by their not having souls, but by claiming they
were better off (or similar drivel). The fact that denying other people had
souls was used at some time to justify it doesn't bother me, since all kinds
of other rationalizations have been used.
Now we are approaching the time when we will have intelligent
mechanical slaves. Are you advocating that it should be illegal to own
robots that can pass the Turing (or other similar) test? I think that a
very important thing to consider is that we can probably make a robot really
enjoy being a slave, by setting up the appropriate top-level goals. Should
this be illegal? I think not. Suppose we reach the point where we can
alter fetuses (see "Brave New World" by Aldous Huxley) to the point where
they *really* enjoy being slaves to whoever buys them. Should this be
illegal? I think so. What if we build fetuses from scratch? Harder to
say, but I suspect this should be illegal.
The most conservative (small "c") approach to the problem is to
grant human rights to anything that *might* qualify as intelligent. I think
this would be a mistake, unless you allow biological organisms a distinction
as outlined above. The next most conservative approach seems to me to leave
the situation where it is today: if it is physically an independent human
life, it has legal rights.
------------------------------
Date: 8 Nov 1983 09:26-EST
From: Jon.Webb@CMU-CS-IUS.ARPA
Subject: parallel vs. sequential
Parallel and sequential machines are not equivalent, even in abstract
models. For example, an abstract parallel machine can generate truly
random numbers by starting two processes at the same time, which are
identical except that one sends the main processor a "0" and the other
sends a "1". The main processor accepts the first number it receives.
A Turing machine can generate only pseudo-random numbers.
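A minimal Python sketch of the race described above (illustrative only;
on a real machine the outcome reflects thread scheduling, and is typically
heavily biased, rather than the idealized simultaneous start of the
abstract model):

    import threading

    def race_bit():
        result = []                        # first value appended "wins"
        lock = threading.Lock()
        def send(value):
            with lock:
                if not result:             # accept only the first arrival
                    result.append(value)
        t0 = threading.Thread(target=send, args=(0,))
        t1 = threading.Thread(target=send, args=(1,))
        t0.start(); t1.start()
        t0.join(); t1.join()
        return result[0]

    print([race_bit() for _ in range(20)])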
However, I do not believe a parallel machine is more powerful (in the
formal sense) than a Turing machine with a true random-number
generator. I don't know of a proof of this; but it sounds like
something that work has been done on.
Jon
------------------------------
Date: Tuesday, 8-Nov-83 18:33:07-GMT
From: O'KEEFE HPS (on ERCC DEC-10) <okeefe.r.a.@edxa>
Reply-to: okeefe.r.a. <okeefe.r.a.%edxa@ucl-cs>
Subject: Ultimate limit on computing speed
--------
There was a short letter about this in CACM about 6 or 7 years ago.
I haven't got the reference, but the argument goes something like this.
1. In order to compute, you need a device with at least two states
that can change from one state to another.
2. Information theory (or quantum mechanics or something, I don't
remember which) shows that any state change must be accompanied
by a transfer of at least so much energy (a definite figure was
given).
3. Energy contributes to the stress-energy tensor just like mass and
momentum, so the device must be at least so big or it will undergo
gravitational collapse (again, a definite figure).
4. It takes light so long to cross the diameter of the device, and
this is the shortest possible delay before we can definitely say
that the device is in its new state.
5. Therefore any physically realisable device (assuming the validity
of general relativity, quantum mechanics, information theory ...)
cannot switch faster than (again a definite figure). I think the
final figure was 10↑-43 seconds, but it's been a long time since
I read the letter.
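The quoted figure is on the order of the Planck time, sqrt(hbar*G/c^5);
a quick numerical check (an illustration added here, not taken from the
CACM letter):

    import math

    hbar = 1.054571817e-34   # reduced Planck constant, J*s
    G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
    c    = 2.99792458e8      # speed of light, m/s

    t_planck = math.sqrt(hbar * G / c**5)
    print(f"Planck time ~ {t_planck:.1e} s")   # about 5.4e-44 s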
I have found the discussion of "what is intelligence" boring,
confused, and unhelpful. If people feel unhappy working in AI because
we don't have an agreed definition of the I part (come to that, do we
*really* have an agreed definition of the A part either? if we come
across a planet inhabited by metallic creatures with CMOS brains that
were produced by natural processes, should their study belong to AI
or xenobiology, and does it matter?) why not just change the name of
the field, say to "Epistemics And Robotics". I don't give a tinker's
curse whether AI ever produces "intelligent" machines; there are tasks
that I would like to see computers doing in the service of humanity
that require the representation and appropriate deployment of large
amounts of knowledge. I would be just as happy calling this AI, MI,
or EAR.
I think some of the contributors to this group are suffering from
physics envy, and don't realise what an operational definition is. It
is a definition which tells you how to MEASURE something. Thus length
is operationally defined by saying "do such and such. Now, length is
the thing that you just measured." Of course there are problems here:
no amount of operational definition will justify any connection between
"length-measured-by-this-foot-rule-six-years-ago" and "length-measured-
by-laser-interferometer-yesterday". The basic irrelevance is that
an operational definition of, say, light (what your light meter measures)
doesn't tell you one little thing about how to MAKE some light. If we
had an operational definition of intelligence (in fact we have quite a
few, and like all operational definitions, nothing to connect them) there
is no reason to expect that to help us MAKE something intelligent.
------------------------------
Date: 7 Nov 83 20:50:48 PST (Monday)
From: Hoffman.es@PARC-MAXC.ARPA
Subject: Turing biography
Finally, there is a major biography of Alan Turing!
Alan Turing: The Enigma
by Andrew Hodges
$22.50 Simon & Schuster
ISBN 0-671-49207-1
The timing is right: His war-time work on the Enigma has now been
de-classified. His rather open homosexuality can be discussed in other
than damning terms these days. His mother passed away in 1976. (She
maintained that his death in 1954 was not suicide, but an accident, and
she never mentioned his sexuality nor his 1952 arrest.) And, of course,
the popular press is full of stories on AI, and they always bring up the
Turing Test.
The book is 529 pages, plus photographs, some diagrams, an author's note
and extensive bibliographic footnotes.
Doug Hofstadter's review of the book will appear in the New York Times
Book Review on November 13.
--Rodney Hoffman
------------------------------
Date: Mon, 7 Nov 83 15:40:46 CST
From: Robert.S.Kelley <kelleyr.rice@Rand-Relay>
Subject: Operational definitions of intelligence
p.s. I can't imagine that psychology has no operational definition of
intelligence (in fact, what is it?). So, if worst comes to worst, AI
can just borrow psychology's definition and improve on it.
Probably the most generally accepted definition of intelligence in
psychology comes from Abraham Maslow's remark (here paraphrased) that
"Intelligence is that quality which best distinguishes such persons as
Albert Einstein and Marie Curie from the inhabitants of a home for the
mentally retarded." A poorer definition is that intelligence is what
IQ tests measure. In fact psychologists have searched without success
for a more precise definition of intelligence (or even learning) for
over 100 years.
Rusty Kelley
(kelleyr.rice@RAND-RELAY)
------------------------------
Date: 7 Nov 83 10:17:05-PST (Mon)
From: harpo!eagle!mhuxl!ulysses!unc!mcnc!ecsvax!unbent @ Ucb-Vax
Subject: Inscrutable Intelligence
Article-I.D.: ecsvax.1488
I sympathize with the longing for an "operational definition" of
'intelligence'--especially since you've got to write *something* on
grant applications to justify all those hardware costs. (That's not a
problem we philosophers have. Sigh!) But I don't see any reason to
suppose that you're ever going to *get* one, nor, in the end, that you
really *need* one.
You're probably not going to get one because "intelligence" is
one of those "open-textured", "clustery" kinds of notions. That is,
we know it when we see it (most of the time), but there are no necessary and
sufficient conditions that one can give in advance which instances of it
must satisfy. (This isn't an uncommon phenomenon. As my colleague Paul Ziff
once pointed out, when we say "A cheetah can outrun a man", we can recognize
that races between men and *lame* cheetahs, *hobbled* cheetahs, *three-legged*
cheetahs, cheetahs *running on ice*, etc. don't count as counterexamples to the
claim even if the man wins--when such cases are brought up. But we can't give
an exhaustive list of spurious counterexamples *in advance*.)
Why not rest content with saying that the object of the game is to get
computers to be able to do some of the things that *we* can do--e.g.,
recognize patterns, get a high score on the Miller Analogies Test,
carry on an interesting conversation? What one would like to say, I
know, is "do some of the things we do *the way we do them*--but the
problem there is that we have no very good idea *how* we do them. Maybe
if we can get a computer to do some of them, we'll get some ideas about
us--although I'm skeptical about that, too.
--Jay Rosenberg (ecsvax!unbent)
------------------------------
Date: Tue, 8 Nov 83 09:37:00 EST
From: ihnp4!houxa!rem@UCLA-LOCUS
THE MUELLER MEASURE
If an AI could be built to answer all questions we ask it to assure us
that it is ideally human (the Turing Test), it ought to
be smart enough to figure out questions to ask itself
that would prove that it is indeed artificial. Put another
way: If an AI could make humans think it is smarter than
a human by answering all questions posed to it in a
Turing-like manner, it still is dumber than a human because
it could not ask questions of a human to make us answer
the questions so that it satisfies its desire for us to
make it think we are more artificial than it is. Again:
If we build an AI so smart it can fool other people
by answering all questions in the Turing fashion, can
we build a computer, anti-Turing-like, that could make
us answer questions to fool other machines
into believing we are artificial?
Robert E. Mueller, Bell Labs, Holmdel, New Jersey
houxa!rem
------------------------------
Date: 9 November 1983 03:41 EST
From: Steven A. Swernofsky <SASW @ MIT-MC>
Subject: Turing test in everyday life
. . .
I know the entity at the other end of the line is not a computer
(because they recognize my voice -- someone correct me if this is not a
good test) but we might ask: how good would a computer program have to
be to fool someone into thinking that it is human, in this limited case?
[There is a system, in use, that can recognize affirmative and negative
replies to its questions.
. . . -- KIL]
No, I always test these callers by interrupting to ask them questions,
by restating what they said to me, and by avoiding "yes/no" responses.
It appears to me that the extremely limited domain, and the utter lack of
expertise which people expect from the caller, would make it very easy to
simulate a real person. Does the fact of a limited domain "disguise"
the intelligence of the caller, or does it imply that intelligence means
a lot less in a limited domain?
-- Steve
------------------------------
End of AIList Digest
********************
∂10-Nov-83 0448 REGES@SU-SCORE.ARPA Charge limiting of student accounts
Received: from SU-SCORE by SU-AI with TCP/SMTP; 10 Nov 83 04:48:31 PST
Date: Thu 10 Nov 83 04:45:12-PST
From: Stuart Reges <REGES@SU-SCORE.ARPA>
Subject: Charge limiting of student accounts
To: students@SU-SCORE.ARPA
cc: faculty@SU-SCORE.ARPA, gotelli@SU-SCORE.ARPA
Office: Margaret Jacks 210, 497-9798
This message is intended for students whose computer accounts are paid for by
the Department and anybody else who is interested. According to the computer
usage policy drafted by Jeff Ullman last spring and adopted by the faculty this
quarter, the Department is willing to be the sponsor of last resort for any
student, but reserves the right to limit that usage. Gene Golub has instructed
me to charge limit the student SCORE accounts being charged to general
Department funds. This I have done.
Charges at SCORE are computed every night at midnight. A user who is charge
limited has a monthly allocation that he/she may not exceed. Once the user has
exceeded the limit, she/he will no longer be allowed to log in (although
because charges are calculated at midnight, a user will not be denied login
until the day after he/she exceeds her/his allocation). The allocation is
currently reset on the first day of every month. There is no accumulation.
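A minimal Python sketch of the policy as described (an illustration only,
not the actual SCORE accounting code; the data layout and names are
placeholders):

    def nightly_update(users):
        # charges are totalled once a night; lock anyone over allocation
        for u in users.values():
            u['locked'] = u['spent'] > u['allocation']

    def can_log_in(user):
        # the lock only takes effect after the midnight run
        return not user['locked']

    def monthly_reset(users):
        # allocation resets on the first of the month; no accumulation
        for u in users.values():
            u['spent'] = 0.0
            u['locked'] = False

    users = {'student': {'allocation': 65.0, 'spent': 70.0, 'locked': False}}
    print(can_log_in(users['student']))    # True: overage not yet processed
    nightly_update(users)
    print(can_log_in(users['student']))    # False: denied the day after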
Gene has asked me to set the allocation for full-time students at $65/month.
HCP students will receive an allocation of $20/month/course. I have set all of
the HCP students to $20, under the assumption that they are only taking one
course. If any HCP student is in fact taking more classes, he/she should
contact me to have his/her allocation increased.
SAIL and ALTO accounts are not charge limited. However, the Department is no
longer giving away these accounts automatically. Students who wish to have
SAIL and ALTO accounts will have to apply for them. A number of students
already have accounts. They will have to make an application to have those
accounts continued in the Winter Quarter. I will not be closing out these
existing accounts at the present time. I will make an announcement with
several weeks notice when I have an application form and procedure worked out.
Students who are doing projects that require more computer time than the $65
allocation allows should contact me. Special requests will be considered and
exceptions will be made for projects that seem worthwhile. The $65 figure is a
first guess. Probably some part of the town meeting with Gene on Friday should
be set aside for consideration of whether this amount is adequate. Users can
find out how much they are spending or have spent on SCORE by using the CHARGE
program. To find out about it, say HELP CHARGE on SCORE or run it:
@charge
Any students who are working on 293 projects should try, whenever possible, to
get a faculty member interested in the research to pay for their computer
usage. Students who are unable to find a sponsor for their 293 project should
talk to me. I will set up a separate account for the project with a separate
allocation from the $65/month.
-------
∂10-Nov-83 0944 @SRI-AI.ARPA:BRESNAN.PA@PARC-MAXC.ARPA Re: Newsletter No. 8, November 10, 1983
Received: from SRI-AI by SU-AI with TCP/SMTP; 10 Nov 83 09:43:54 PST
Received: from PARC-MAXC.ARPA by SRI-AI.ARPA with TCP; Thu 10 Nov 83 09:42:27-PST
Date: Thu, 10 Nov 83 09:40 PST
From: BRESNAN.PA@PARC-MAXC.ARPA
Subject: Re: Newsletter No. 8, November 10, 1983
In-reply-to: "DKANERVA@SRI-AI.ARPA's message of Wed, 9 Nov 83 18:18:34
PST"
To: DKANERVA@SRI-AI.ARPA
cc: csli-friends@SRI-AI.ARPA
Hi, Dianne,
Would you resend a copy of the latest newsletter to me? The one I
received was garbled in transmission.
Thanks--Joan
∂10-Nov-83 1058 JF@SU-SCORE.ARPA abstract for G. Kuper's talk
Received: from SU-SCORE by SU-AI with TCP/SMTP; 10 Nov 83 10:58:45 PST
Date: Thu 10 Nov 83 10:55:18-PST
From: Joan Feigenbaum <JF@SU-SCORE.ARPA>
Subject: abstract for G. Kuper's talk
To: bats@SU-SCORE.ARPA
Here is the promised abstract for Gabriel Kuper's talk at the BATS meeting
planned for 11/21 at Stanford. If you would like another copy of the other
three abstracts, please let me know.
joan
(jf@su-score)
We describe a new approach to studying the semantics of updates, in which we
treat a database as a set of statements in first-order logic. Unlike ordinary
first-order theories, we distinguish between whether a statement appears
explicitly in the set, or is just a logical consequence of statements in the
set.
We describe several approaches to updating such theories, i.e., inserting new
statements and deleting statements. In one approach, the result of the update
is a single theory that consists of disjunctions of statements in the original
theory. In the second approach, the result is a flock, i.e., a collection of
theories, each of which is a possible result of the update.
We investigate the question: when are two theories or flocks the same under all
possible updates? For the disjunction approach and for singleton flocks, we
show that a necessary and sufficient condition is that each theory cover the
other, i.e., each statement in one is a conjunction of statements in the other.
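A toy Python reading of the flock idea (an illustration only; the example
statements, the hard-wired entailment test, and the minimality policy below
are hypothetical and are not the formalism of the talk):

    from itertools import combinations

    def delete_flock(theory, target, entails):
        """theory: frozenset of explicit statements;
           entails(stmts, stmt) -> bool.  Returns the set of possible
           resulting theories after deleting target."""
        if not entails(theory, target):
            return {theory}
        flock = set()
        # drop the smallest sets of explicit statements that break the entailment
        for r in range(1, len(theory) + 1):
            for drop in combinations(theory, r):
                remainder = theory - set(drop)
                if not entails(remainder, target):
                    flock.add(frozenset(remainder))
            if flock:
                break
        return flock

    # tiny propositional example with an entailment test hard-wired to it
    def entails(stmts, stmt):
        if stmt == "q":
            return "q" in stmts or ("p" in stmts and "p->q" in stmts)
        return stmt in stmts

    print(delete_flock(frozenset({"p", "p->q"}), "q", entails))
    # two possible results: keep "p" only, or keep "p->q" only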
-------
∂10-Nov-83 1315 BMACKEN@SRI-AI.ARPA Transportation for Fodor and Partee
Received: from SRI-AI by SU-AI with TCP/SMTP; 10 Nov 83 13:15:20 PST
Date: Thu 10 Nov 83 13:15:34-PST
From: BMACKEN@SRI-AI.ARPA
Subject: Transportation for Fodor and Partee
To: csli-folks@SRI-AI.ARPA
Jerry Fodor and Barbara Partee will be arriving at SF International
on Thursday evening, Nov.17, at 10:10 PM on TWA flight #61. Would
anyone like to meet their flight and drive them to Palo Alto?
Let me know if you would.
Thanks.
B.
-------
∂10-Nov-83 1437 KONOLIGE@SRI-AI.ARPA Thesis orals
Received: from SRI-AI by SU-AI with TCP/SMTP; 10 Nov 83 14:37:10 PST
Date: Thu 10 Nov 83 14:33:57-PST
From: Kurt Konolige <Konolige@SRI-AI.ARPA>
Subject: Thesis orals
To: csli-friends@SRI-AI.ARPA
The room for Kurt Konolige's thesis orals on Nov. 15 is the
conference room of Building 170 (in the history corner of the Quad).
-------
∂10-Nov-83 1447 ELYSE@SU-SCORE.ARPA NSF split up
Received: from SU-SCORE by SU-AI with TCP/SMTP; 10 Nov 83 14:47:19 PST
Date: Thu 10 Nov 83 14:45:42-PST
From: Elyse Krupnick <ELYSE@SU-SCORE.ARPA>
Subject: NSF split up
To: Faculty@SU-SCORE.ARPA
Stanford-Phone: (415) 497-9746
@make(text)
From a letter by Jim Infante, Division Director of Math & Computer Sciences.
Dear Gene:
I write to you to let you know that the Director of the Foundation, Dr.
E.A. Knapp, last week agreed with the recommendation that the Division of
Mathematical and Computer Sciences be divided into two separate divisions,
one for the Mathematical Sciences, the other for Computer Research. He has
authorized Dr. Marcel Bardon, Acting Assistant Director for Math and
Physical Sciences, to proceed with the implementation of such a separation
and with the appropriate staffing of the two new divisions.
Enclosed please find a copy of a memorandum that describes the
recommendation to Dr. Knapp, and which will be implemented in the very
near future.
If I or members of the staff of the Division can be helpful in
explaining this reorganization, please do not hesitate to contact us.
Sincerely, etc.
The memorandum can be seen in my office.
Gene.
-------
∂10-Nov-83 1553 BRODER@SU-SCORE.ARPA Next AFLB talk(s)
Received: from SU-SCORE by SU-AI with TCP/SMTP; 10 Nov 83 15:53:16 PST
Date: Thu 10 Nov 83 15:49:57-PST
From: Andrei Broder <Broder@SU-SCORE.ARPA>
Subject: Next AFLB talk(s)
To: aflb.all@SU-SCORE.ARPA
cc: sharon@SU-SCORE.ARPA
Stanford-Office: MJH 325, Tel. (415) 497-1787
N E X T A F L B T A L K (S)
Ragged right WON !!!
Note that besides the regular Thursday talk, there will be an extra
AFLB, Friday, Nov. 18, at 2:15, in MJH301.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
11/17/83 - Prof. Dave Kirkpatrick (Univ. of British Columbia):
Title to be announced
******** Time and place: Nov. 17, 12:30 pm in MJ352 (Bldg. 460) *******
Special AFLB talk:
11/18/83 - Prof. P. van Emde Boas (University of Amsterdam):
Complexity classes like P and NP are well defined based on the fact
that within the family of "reasonable" machine models, each model can
simulate each other model with a polynomially bounded overhead in
time. Similarly, in order that a class like LOGSPACE is well defined,
one needs to establish that these models can simulate each other with
a constant factor overhead in space.
It seems that the standard definition of the space measure on RAM's
with respect to this issue is not the correct one. We provide an
alternative definition which is correct, and establish that for the
case of on-line processing the two definitions indeed are different.
Our case would be much stronger if we could provide an off-line
counterexample as well, but an attempted counterexample fails to
separate tape and core. The simulation needed to recognize this
language in extremely little space on a Turing machine is based upon
an improvement with respect to space consumption of the perfect
hashing functions given by Fredman, Komlos, and Szemeredi in their
1982 FOCS paper. We show that it is possible to obtain perfect hash
functions for n-element subsets of a u-element universe requiring
space O(log(u)+n) bits for being designed, described and evaluated.
Our simulation enables us to show that the two space measures for
RAM's actually are equal for spacebounds LOG(n) or larger.
This research is joint work with C. Slot.
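For background, a minimal Python sketch of the classic two-level
(Fredman-Komlos-Szemeredi) perfect-hashing construction referred to above
(an illustration only; the O(log(u)+n)-bit refinement described in the
abstract is not shown, and the parameters below are placeholders):

    import random

    def build_fks(keys, p=(1 << 31) - 1):
        """Two-level perfect hash table for a static set of integer keys < p."""
        n = len(keys)
        while True:                                # pick a good first-level hash
            a, b = random.randrange(1, p), random.randrange(p)
            buckets = [[] for _ in range(n)]
            for k in keys:
                buckets[((a * k + b) % p) % n].append(k)
            if sum(len(bk) ** 2 for bk in buckets) <= 4 * n:
                break                              # total secondary space O(n)
        tables = []
        for bk in buckets:                         # second level, per bucket
            m = len(bk) ** 2
            while True:
                if not bk:
                    tables.append((0, 0, 0, []))
                    break
                a2, b2 = random.randrange(1, p), random.randrange(p)
                slots = [None] * m
                if all(_place(slots, ((a2 * k + b2) % p) % m, k) for k in bk):
                    tables.append((a2, b2, m, slots))
                    break
        return a, b, p, n, tables

    def _place(slots, i, k):
        if slots[i] is not None:
            return False
        slots[i] = k
        return True

    def member(table, key):
        a, b, p, n, tables = table
        a2, b2, m, slots = tables[((a * key + b) % p) % n]
        return m > 0 and slots[((a2 * key + b2) % p) % m] == key

    t = build_fks([3, 17, 4096, 99991])
    print(member(t, 17), member(t, 18))            # True False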
******** Time and place: Nov. 18, 2:15 pm in MJ301 (Bldg. 460) *******
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Regular AFLB meetings are on Thursdays, at 12:30pm, in MJ352 (Bldg.
460).
If you have a topic you would like to talk about in the AFLB seminar
please tell me. (Electronic mail: broder@su-score.arpa, Office: CSD,
Margaret Jacks Hall 325, (415) 497-1787) Contributions are wanted and
welcome. Not all time slots for the autumn quarter have been filled
so far.
For more information about future AFLB meetings and topics you might
want to look at the file [SCORE]<broder>aflb.bboard .
- Andrei Broder
-------
∂10-Nov-83 1649 GOLUB@SU-SCORE.ARPA Meeting with Bower
Received: from SU-SCORE by SU-AI with TCP/SMTP; 10 Nov 83 16:49:30 PST
Date: Thu 10 Nov 83 16:47:19-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Meeting with Bower
To: CSD-Senior-Faculty: ;
I met with Gordon Bower today. He agreed to let us submit the
Guibas papers to his office and to have us search for a chairperson.
I'll soon appoint a committee for the search.
Gordon will join us for lunch on Tuesday. GENE
-------
∂10-Nov-83 2023 CLT SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
To: "@DIS.DIS[1,CLT]"@SU-AI
SPEAKER: Yoram Moses, Stanford
TITLE: A Formal Treatment Of Ignorance
TIME: Wednesday, November 16, 4:15-5:30 PM
PLACE: Stanford Mathematics Dept. Faculty Lounge (383-N)
Abstract :
We augment well known decidable variants of (Propositional) Modal
Logics of Knowledge with a validity operator V, where Vp means that
p is valid. The next step involves an attempt to define what we may
claim to know, based on a fixed knowledge base S. An interesting
case is that we can conclude ¬Kp whenever we fail to prove Kp. In
particular, we argue for assuming the axiom ¬Kp ⊃ K¬Kp whenever
Kp ⊃ KKp is assumed. Should time allow, we shall remark on the
Unexpected Examination (Hangman's) Paradox, presenting a new approach
to describing how to avoid the real world's "refuting logic".
This talk will consist of a presentation of research in progress.
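A toy Python reading of "conclude ¬Kp whenever we fail to prove Kp", with K
taken as derivability from the fixed knowledge base (an illustration only;
the Horn-rule format here is hypothetical and far weaker than the modal
system of the talk):

    def knows(kb, rules, p):
        """kb: set of atomic facts; rules: list of (premises, conclusion).
        Forward-chain to a fixed point and test whether p is derivable."""
        derived = set(kb)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if conclusion not in derived and all(q in derived for q in premises):
                    derived.add(conclusion)
                    changed = True
        return p in derived

    kb = {"p"}
    rules = [(("p",), "q")]
    print(knows(kb, rules, "q"))   # True:  K q
    print(knows(kb, rules, "r"))   # False: proof of K r fails, so conclude not-K r
    # and, by negative introspection (not-Kp implies K not-Kp), also K not-K r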
Coming Events:
November 23, Craig Smorynski
November 30, J.E. Fenstad
∂10-Nov-83 2025 CLT MTC SEMINAR
To: "@DIS.DIS[1,CLT]"@SU-AI
SPEAKER: Lawrence Paulson, University of Cambridge
TITLE: Verifying the Unification Algorithm in LCF
TIME: Wednesday, November 16, 12noon
PLACE: Margaret Jacks Rm 352 (Stanford Computer Science Department)
Abstract:
Manna and Waldinger [1] have outlined a substantial theory of substitutions,
establishing the Unification Algorithm. All their proofs have been
formalized in the interactive theorem-prover LCF, using mainly structural
induction and rewriting. The speaker will present an overview of the problems
and results of this project, along with a detailed account of the LCF proof
that substitution is monotonic relative to the occurrence ordering.
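For orientation, a compact Python sketch of first-order unification with the
occurs check (an illustration only; the term representation is a placeholder
and this is not the Manna-Waldinger formulation that was verified):

    def is_var(t):
        return isinstance(t, str) and t.startswith('?')

    def walk(t, s):
        while is_var(t) and t in s:
            t = s[t]
        return t

    def occurs(v, t, s):
        t = walk(t, s)
        if t == v:
            return True
        return isinstance(t, tuple) and any(occurs(v, a, s) for a in t[1:])

    def unify(t1, t2, s=None):
        """Terms: variables are strings starting with '?'; compound terms are
        tuples (functor, arg1, ..., argN).  Returns a substitution or None."""
        if s is None:
            s = {}
        t1, t2 = walk(t1, s), walk(t2, s)
        if t1 == t2:
            return s
        if is_var(t1):
            return None if occurs(t1, t2, s) else {**s, t1: t2}
        if is_var(t2):
            return None if occurs(t2, t1, s) else {**s, t2: t1}
        if (isinstance(t1, tuple) and isinstance(t2, tuple)
                and len(t1) == len(t2) and t1[0] == t2[0]):
            for a, b in zip(t1[1:], t2[1:]):
                s = unify(a, b, s)
                if s is None:
                    return None
            return s
        return None

    # unify f(?x, g(?y)) with f(a, g(b))
    print(unify(('f', '?x', ('g', '?y')), ('f', ('a',), ('g', ('b',)))))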
Their theory is oriented towards Boyer and Moore's logic. LCF accepted it
with little change, though it proves theorems in Scott's logic of continuous
functions and fixed-points. Explicit reasoning about totality was added
everywhere (a nuisance), and the final well-founded induction was
reformulated as three nested structural inductions. A simpler data structure
for expressions was chosen, and methods developed to express the abstract
type for substitutions. Widespread engineering improvements in the
theorem-prover produced the new Cambridge LCF as a descendant of Edinburgh
LCF.
Some proofs require considerable user direction. A difficult proof may
result from a badly formulated theorem, the lack of suitable lemmas, or
weaknesses in LCF's automatic tools. The speaker will discuss how to
organize proofs.
"1# Z. Manna and R. Waldinger,
Deductive Synthesis of the Unification Algorithm,
Science of Computer Programming 1 (1981), pages 5-48.
∂10-Nov-83 2042 @SRI-AI.ARPA:CLT@SU-AI SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
Received: from SRI-AI by SU-AI with TCP/SMTP; 10 Nov 83 20:41:18 PST
Received: from SU-AI.ARPA by SRI-AI.ARPA with TCP; Thu 10 Nov 83 20:41:48-PST
Date: 10 Nov 83 2023 PST
From: Carolyn Talcott <CLT@SU-AI>
Subject: SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
To: "@DIS.DIS[1,CLT]"@SU-AI
SPEAKER: Yoram Moses, Stanford
TITLE: A Formal Treatment Of Ignorance
TIME: Wednesday, November 16, 4:15-5:30 PM
PLACE: Stanford Mathematics Dept. Faculty Lounge (383-N)
Abstract :
We augment well known decidable variants of (Propositional) Modal
Logics of Knowledge with a validity operator V, where Vp means that
p is valid. The next step involves an attempt to define what we may
claim to know, based on a fixed knowledge base S. An interesting
case is that we can conclude ¬Kp whenever we fail to prove Kp. In
particular, we argue for assuming the axiom ¬Kp ⊃ K¬Kp whenever
Kp ⊃ KKp is assumed. Should time allow, we shall remark on the
Unexpected Examination (Hangman's) Paradox, presenting a new approach
to describing how to avoid the real world's "refuting logic".
This talk will consist of a presentation of research in progress.
Coming Events:
November 23, Craig Smorynski
November 30, J.E. Fenstad
∂10-Nov-83 2042 @SRI-AI.ARPA:CLT@SU-AI MTC SEMINAR
Received: from SRI-AI by SU-AI with TCP/SMTP; 10 Nov 83 20:36:20 PST
Received: from SU-AI.ARPA by SRI-AI.ARPA with TCP; Thu 10 Nov 83 20:37:19-PST
Date: 10 Nov 83 2025 PST
From: Carolyn Talcott <CLT@SU-AI>
Subject: MTC SEMINAR
To: "@DIS.DIS[1,CLT]"@SU-AI
SPEAKER: Lawrence Paulson, University of Cambridge
TITLE: Verifying the Unification Algorithm in LCF
TIME: Wednesday, November 16, 12noon
PLACE: Margaret Jacks Rm 352 (Stanford Computer Science Department)
Abstract:
Manna and Waldinger [1] have outlined a substantial theory of substitutions,
establishing the Unification Algorithm. All their proofs have been
formalized in the interactive theorem-prover LCF, using mainly structural
induction and rewriting. The speaker will present an overview of the problems
and results of this project, along with a detailed account of the LCF proof
that substitution is monotonic relative to the occurrence ordering.
Their theory is oriented towards Boyer and Moore's logic. LCF accepted it
with little change, though it proves theorems in Scott's logic of continuous
functions and fixed-points. Explicit reasoning about totality was added
everywhere (a nuisance), and the final well-founded induction was
reformulated as three nested structural inductions. A simpler data structure
for expressions was chosen, and methods developed to express the abstract
type for substitutions. Widespread engineering improvements in the
theorem-prover produced the new Cambridge LCF as a descendant of Edinburgh
LCF.
Some proofs require considerable user direction. A difficult proof may
result from a badly formulated theorem, the lack of suitable lemmas, or
weaknesses in LCF's automatic tools. The speaker will discuss how to
organize proofs.
"1# Z. Manna and R. Waldinger,
Deductive Synthesis of the Unification Algorithm,
Science of Computer Programming 1 (1981), pages 5-48.